
For an intermediate library, it doesn't actually seem to be very backend-independent. At least the desktop sample [1] has a disappointing number of code paths with parallel implementations for OpenGL and Vulkan, switched at compile time using "#if USE_OPENGL_BACKEND".

I guess this means that this sample will use OpenGL rather than Metal on macOS? They claim there's a Metal backend, but how would one enable that for the "Tiny" sample? By manually adding a third parallel implementation with "#elif USE_METAL_BACKEND" everywhere?

[1] https://github.com/facebook/igl/blob/main/samples/desktop/Ti...



This looks like what happens when the "All abstractions are leaky" crowd go too far. It's an absence of abstraction to the point that the intermediary layer is not helpful enough, and will simply end up hidden behind another layer of obfuscatory gunk.

Graphics people need to face the fact that writing optimised cross-platform renderers can no longer be solved by divide-and-conquer into layers in this bottom-up way. Instead, you need to architect the data flow of the renderer and implement platform-specific, optimal approaches for each sub-part of it, approaches so specific that this sort of wrapper would not help. This isn't exactly far removed from the pyramids -> gothic cathedral comparison.


I have been researching, and it is my current understanding that this is the approach that WebGPU takes, specifically wgpu.

Is that right? I have been thinking that wgpu is the best choice available for intermediate cross platform graphics one level below something like Skia and one level above Vulkan, OpenGL, DirectX, and Metal.


Reads to me like the usual collection of utility/glue that people build for their own needs? Those usually don't get open sourced, but it's also not wrong to do that. At least if you can resist the urge to announce it as the graphics API to end all graphics APIs...


> can resist the urge to announce it as the graphics API to end all graphics APIs

Please quote the text that made you believe this. It's a very negative take.


I did not intend to imply that they did. Sorry if that wasn't clear. I was considering adding a few words in that direction but went with brevity.


You can build far better abstractions than what we have here. A lot of games that support more than two platforms have graphics abstractions that leak less than this.


Don’t you think this is already happening for most AAA production tech? Most high performance real-time stuff does go top-down. Mid-core stuff or art tooling might be fine with bgfx (or this I guess).


Those seem to be mainly platform-specific initialization things, like creating a window and rendering context. That's pretty normal for these types of rendering libraries which don't include a full blown portable windowing API.

If you use something like SDL, you'll probably be able to minimize the platform-specific stuff.


Even outside of platform-specific stuff, I'm seeing it all over the core rendering code. I'm not impressed. https://github.com/facebook/igl/blob/main/samples/desktop/Ti...


You're right, that's weird. The particular snippet you linked to isn't that egregious; it just creates a dummy 1x1 texture for some reason and changes a hint. A lot of stuff in that file, though, is branched on that OpenGL flag. The `render` function is a huge mess, and it doesn't even seem like that demo supports Metal at all.

But tbf also, I'm just skimming the code base. Maybe they'll publish some better docs later that explain/justify these things.


It's a very weird one.

createRenderPipelines uses the branch to add #version 460 to the beginning of shaders (one would think the platform backend could do that for you; also, this won't support GLES2 or GL3), and it also makes the programmer build a sampler/uniform-block mapping table.

That's... perhaps needed if you're on GL3 because you don't have access to binding=N in the shading language and you don't want to do any shader parsing in the backend, but also, you're forcing it on GL 4.6, so... huh? Just use explicit binding in the shader and the binding index APIs.

The render function uses USE_OPENGL_BACKEND to adapt for the -1...1 clip space. Sure, glClipControl is more modern than your minspec, but again, you're already forcing GL 4.6, so WTF. Also, it's not hard to expose a device_->adjustProjectionMatrixForNativeClipSpace() that does a matrix mul.

It also makes the shadow render target have a color attachment (wtf? depth-only targets are supported just fine in GLES2/GL3 to my knowledge), and it also... doesn't use an index buffer when rendering? (EDIT: This is because it's using a 32-bit index buffer, which GLES2 doesn't support. But it's a much better idea to split it into multiple 16-bit index buffer draws if required than drop the index buffer entirely... also, you know, shaders have 4.6). I give up trying to understand what's going on. Oh, and despite building the uniformblock mapping table from before, you still have to use glPipelineState->getUniformBlockBindingPoint? What on earth?
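The 16-bit fallback suggested above is mechanical whenever the indices happen to fit; a sketch (the base-vertex range splitting needed for larger meshes is more involved and omitted):

```cpp
#include <cstdint>
#include <optional>
#include <vector>

// Sketch: narrow a 32-bit index buffer to 16 bits when possible,
// instead of dropping indexed drawing entirely. If any index
// exceeds 0xFFFF you would instead split the draw into ranges and
// rebase each range with a base-vertex offset (not shown).
std::optional<std::vector<uint16_t>>
narrowIndices(const std::vector<uint32_t>& in) {
  std::vector<uint16_t> out;
  out.reserve(in.size());
  for (uint32_t i : in) {
    if (i > 0xFFFF) return std::nullopt; // needs the splitting path
    out.push_back(static_cast<uint16_t>(i));
  }
  return out;
}
```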

This does not impress me.


What really makes me sad is that there is enough of my brain space wasted on gpu esoterics that I can understand this comment.

GPU programming is insane and the devs have stockholm syndrome.


Ouch, it is really bad.

I had only read the announcement by the time I submitted it; it is quite clunky.


I disagree. Most of what's platform-specific is stuff like shaders and host window handling. I definitely do NOT want a graphics layer owning that: I may want to control the type of OS window (say, for an audio plugin). I already know GL shaders, and Metal is really similar; a new 'common denominator' shading language would just be another language, with fewer docs, Stack Overflow posts, etc.

And on the other hand, you get abstractions for the stuff that really is similar, like command buffers, camera control etc.

I think the API is well designed and nails flexibility together with performance, at the cost of requiring expert knowledge of at least one backend. After that I would get ChatGPT to convert my shaders to Metal, Vulkan, etc.


I think the documentation could be a lot clearer about this. I can understand if the goal is for the abstraction to be thin and unopinionated, and that means you have to be an expert in each backend. But they don’t seem to explain this very well.


There is an iOS/Metal sample here that seems to support OpenGL: https://github.com/facebook/igl/blob/main/samples/ios/snapsh...

The shaders are backend-specific, but the rest is mostly generic?


Same impression here. The triangle example isn't better than a raw Vulkan one. Then I thought maybe it'd have more value in larger applications, but the Bistro demo is just the same kind of leaky abstraction code all over the place. The GUI is also just ImGui. I don't know what value this library provides.


I thought the very same when I looked at the Tiny sample, just a lot of #if conditional compilation.



