Meta releases Intermediate Graphics Library (khronos.org)
434 points by pjmlp on July 7, 2023 | hide | past | favorite | 209 comments


That scene that says "Copyright ©Meta" is actually a CC-BY scene from Amazon Lumberyard.[1]

I'm not sure rendering it entitles you to slap your own copyright notice on it while disrespecting the CC-BY. Further, the interface shown is just plain ImGui. That'd be as if I made an image viewer using off-the-shelf parts, displayed some other artist's work in it, then pretended I own the copyright to what my software now displays.

Obviously I don't. The very purpose of this software and image viewers alike is to display other people's work. What Meta owns is software, not the output it may produce.

These corporations are way too eager to slap their copyright notices on everything. It's definitely not a harmless mistake when those same corporations own literal armies of lawyers who protect their employer's "interests" while not necessarily understanding processes happening in other parts of the company.

In general anyhow. In this case it's luckily just Goliath vs. Goliath and not some poor indie developer getting shafted and robbed of credit.

[1] https://developer.nvidia.com/orca/amazon-lumberyard-bistro


If I take a picture of a Mickey Mouse(TM) figurine, I own the copyright to the photo. Disney will retain copyright to their model, but that doesn't mean they own the result of my work, even if it's derivative.


Yes, but Disney also had a Registered Trademark on Mickey Mouse, so any usage of the photo has to be in accordance with that representation.


But I can still add a "(c) My Name" on the image, which is the question at hand.


Having copyright does not serve as an exemption for the limitations of trademark/trade dress.


Yes, but that's an orthogonal argument to what this whole thread was about. Whether Meta has copyright on a render of a scene created by a different company.


Interestingly enough, the copyright on the 1928 movie “Steamboat Willie” – the short film that introduced the world to Mickey Mouse – will expire in 2024. That means the Steamboat-Willie-version of Mickey Mouse will enter the public domain.

TM != Copyright, but still interesting.


As I understand it, that’s why they’ve been using the little Steamboat Willie clip at the beginning of films for the last several years—to make it a trademark, which never expires while in use.


"Not if we have anything to say about that!" ~ Disney lawyers and lobbyists anxious to extend copyright protections again


The general consensus is that the current Congress is much less amenable to that lobbying than Congress was back in 1998, the last time US copyright terms were extended – and that Disney realises that, so they aren't seriously pursuing it this time around.

One reason is that supporters of the public domain are much better organised than in the 1990s, and their cause has become a lot more popular and mainstream. For example, Wikipedia is a household name with a lot of money (the Wikimedia Foundation has over US$200 million of cash and investments), and they would lobby and campaign hard against any such proposal if it was being seriously pursued.

In the 1990s, you had the film, television, publishing and music industries all supporting copyright term extension, and no serious corporate opposition to it – I doubt most big tech companies would support copyright term extension, because they get no benefit from it (all of their own copyrighted works are much more recent), whereas public domain works are actually a resource they can use for their own purposes (zero copyright risk AI input)

Also: Disney was already unpopular with social conservatives in the 1990s, but they've arguably grown even more anti-Disney in the years since, plus the post-Trump GOP finds itself far more beholden to its base than the 1990s GOP was – nobody in the contemporary GOP wants to vote for anything viewed as doing Disney's bidding, because they probably won't be forgiven. In the 1990s, they could be confident they would be.


I hope that you're right. Even the current copyright duration is absolutely insane, and the world is losing trillions USD in progress/knowledge/opportunity just so Disney can sell Mickey f@¢#ing Mouse! Enough already!

I'm willing to give the 1998 legislators the benefit of the doubt -- they were probably clueless when it comes to the internet and technology. But extending copyright further now should be seen as a crime against humanity.


Steamboat Willie itself might enter the public domain, but good luck trying to use the specific rendition of Mickey Mouse. Disney's been using that rendition on stuff like t-shirts recently to effectively renew their IP rights, probably because the larger copyright is ending soon.


I remember when Congress extended the copyright for Disney. The extension date seemed so far out as to be unachievable…


A rendering of a scene is definitely a work of its own and has its own copyright, which is held by Meta. The scene is also attributed in the GitHub repo license[1]. So the only problem here is that the Khronos post is missing the attribution.


I wouldn't be so certain. Under US law that may be false due to it lacking originality. For example, a photograph of a public domain painting is itself considered public domain [1]. This is not the same in all countries though, e.g. not the UK [2]

[1] https://en.wikipedia.org/wiki/Bridgeman_Art_Library_v._Corel....

[2] https://en.wikipedia.org/wiki/National_Portrait_Gallery_and_...


I don't want to venture into the legal question, because I think it depends on the details of exactly what they changed compared to the original, and how much human creativity went into those changes, and I don't think we have those details.

But, most engineers/PMs/etc don't have a good understanding of copyright law – it wouldn't surprise me if the authors of that blog post just slapped "Copyright Meta" on it by default because they are used to doing that, and aren't thinking at all about technical legal questions of copyrightability. Furthermore, it isn't really their job to think about those questions – that's what companies employ lawyers for – and I imagine the lawyers likely think that asserting copyright over the uncopyrightable has minimal negative consequences, whereas failing to make that assertion can work against them if it ever becomes the basis of a lawsuit, so better just tell the employees to slap a copyright notice on everything.

I once contributed (on my employer's time) to a FAANG open source project (I'll avoid saying which project or FAANG because I don't want to publicly embarrass anybody). I added a brand new file which I'd written from scratch; I copied the copyright/license notice from one of the existing files to the new one, but I changed it from "Copyright [FAANG]" to "Copyright [MyEmployer]". The FAANG employee who ran the open source project objected to that – "why did you change the copyright, everything in this project is copyright by [FAANG]"– the project didn't have a CLA, by the way. I told them they were wrong about the law, and if they didn't believe me, ask their own lawyers – and maybe they did talk to them, because they dropped the objection and ended up merging it, complete with my employer's copyright notice. So even FAANG engineers can fail to grasp the basics of copyright law.


A photo of a public domain painting is only in the public domain if the photograph wasn't distinct enough from the photographed work. Rendering requires a number of "artistic" choices so I doubt that precedent would apply here.

Here is a better explanation from your Wikipedia link:

> Bridgeman Art Library v. Corel Corp. [...] which ruled that exact photographic copies of public domain images could not be protected by copyright in the United States because the copies lack originality.


> Rendering requires a number of “artistic” choices so I doubt that precedent would apply here.

I wouldn’t speculate on what might fly in court, but while some rendering can require artistic choices, it’s certainly not a requirement for all renderings. More importantly, the specific renderings in question here are very low on the artistic-choices scale; they’re generic screen captures meant to demonstrate the library’s functionality, not carefully rendered imagery. The Bistro scene is instantly recognizable, the view is generic and similar to many existing renderings, and it’s lower quality than what you get if you web-search for “bistro scene render”.

I will speculate that it seems likely that Khronos slapped the copyright notice simply because they got the images from Meta, and that Meta made them of this scene specifically because the scene has an open license, and Meta had no particular intent to assert copyrights. I bet this is only a CYA by Khronos, and not even a question of precedent. That said, I guess maybe I think it lands closer to Bridgeman v Corel than you do.


What I meant by rendering is not scene staging (lights, camera position, etc.); what I meant was that when you implement a rendering engine, you need to make artistic choices. When you implement a lighting system, there are a number of decisions you need to make about how it works. Each decision changes how the final lighting looks. That’s why you can pretty much tell when a video game was implemented using UE3: everything is super shiny.


This isn’t a rendering engine, it’s a library layer that just provides an interface to OpenGL, Vulkan, Metal, etc. The pics in the article look like OpenGL renders, and this library is not really making its own “artistic choices”.

Even if it was a rendering engine, having worked on rendering engines for both games and film, I’m unconvinced by your argument. In fact, the goal is typically to avoid baking artistic choices into the engine. The goal is usually to represent the choices made in the scene and the staging by the actual artists faithfully without bias. Sometimes there are some identifiable styles that emerge out of the technical limitations of an engine, or occasionally from unique technical features. It’d be a stretch to call those artistic choices. There can also be uniquely stylized engines that make unique artistic choices, and they’re pretty niche so I can’t even name one off the top of my head, but this library by Meta definitely isn’t one of those.

And again, if you look at the two specific images in question in the article, there really aren’t any particularly unique artistic choices there, neither in the rendering engine nor in the staging. They look like screenshots of an OpenGL render of the CC licensed Bistro, using a camera view and lighting that is similar to thousands of other shots of this scene.


For an intermediate library, it doesn't actually seem very backend-independent. At least the desktop sample [1] has a disappointing number of code paths with parallel implementations for OpenGL and Vulkan, switched at compile time using "#if USE_OPENGL_BACKEND".

I guess this means that this sample will use OpenGL rather than Metal on macOS? They claim there's a Metal backend, but how would one enable that for the "Tiny" sample? By manually adding a third parallel implementation with "#elif USE_METAL_BACKEND" everywhere?

[1] https://github.com/facebook/igl/blob/main/samples/desktop/Ti...


This looks like what happens when the "All abstractions are leaky" crowd go too far. It's an absence of abstraction to the point that the intermediary layer is not helpful enough, and will simply end up hidden behind another layer of obfuscatory gunk.

Graphics people need to face the fact that writing optimised cross platform renderers is not something that can be solved by divide/conquer into layers in this bottom up way anymore, instead you need to architect the data flow of the renderer and implement platform specific/optimal approaches for each sub part of that, which are so specific that this sort of wrapper would not help. This isn't exactly far removed from the pyramids -> gothic cathedral comparison.


I have been researching, and it is my current understanding that this is the approach that WebGPU takes, specifically wgpu.

Is that right? I have been thinking that wgpu is the best choice available for intermediate cross platform graphics one level below something like Skia and one level above Vulkan, OpenGL, DirectX, and Metal.


Reads to me like the usual collection of utility/glue that people build for their own needs? Those usually don't get open sourced, but it's also not wrong to do that. At least if you can resist the urge to announce it as the graphics API to end all graphics APIs...


> can resist the urge to announce it as the graphics API to end all graphics APIs

Please quote the text that made you believe this. It's a very negative take.


I did not intend to imply that they did. Sorry if that hasn't been clear. I was considering adding a few words in that direction but went with brevity.


You can build far better abstractions than what we have here. A lot of games that support more than two platforms have graphics abstractions that leak less than this.


Don’t you think this is already happening for most AAA production tech? Most high performance real-time stuff does go top-down. Mid-core stuff or art tooling might be fine with bgfx (or this I guess).


Those seem to be mainly platform-specific initialization things, like creating a window and rendering context. That's pretty normal for these types of rendering libraries which don't include a full blown portable windowing API.

If you use something like SDL, you'll probably be able to minimize the platform-specific stuff.


Even outside of platform-specific stuff, I'm seeing it all over the core rendering code. I'm not impressed. https://github.com/facebook/igl/blob/main/samples/desktop/Ti...


You're right, that's weird. The particular snippet you linked to isn't that egregious, it just creates a dummy 1x1 texture for some reason, and changes a hint. A lot of stuff in that file though is branched on that opengl flag. The `render` function is a huge mess, and it doesn't even seem like that demo supports Metal at all.

But tbf also, I'm just skimming the code base. Maybe they'll publish some better docs later that explain/justify these things.


It's a very weird one.

createRenderPipelines uses the branch to add #version 460 to the beginning of shaders (one would think the platform backend could do that for you, also this won't support GLES2 or GL3), and also makes the programmer build a sampler/uniformblock mapping table.

That's... perhaps needed if you're on GL3 because you don't have access to binding=N in the shading language and you don't want to do any shader parsing in the backend, but also, you're forcing it on GL 4.6, so... huh? Just use explicit binding in the shader and the binding index APIs.

The render function uses USE_OPENGL_BACKEND to adapt for the -1...1 clip space. Sure, again, glClipControl is more modern than your minspec, but you're already forcing GL 4.6, so WTF. Also, it's not hard to write device_->adjustProjectionMatrixForNativeClipSpace(); that does a matrix mul.
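A minimal sketch of what such a matrix-mul fix-up could look like (hypothetical helper name, not IGL's actual API; assuming column-major GL-style matrices):

```cpp
#include <array>

// Remap a GL-style projection matrix (NDC z in [-1, 1]) so depth lands
// in [0, 1] as Vulkan/Metal expect. Column-major 4x4, as in OpenGL.
using Mat4 = std::array<float, 16>;

Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int c = 0; c < 4; ++c)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                r[c * 4 + row] += a[k * 4 + row] * b[c * 4 + k];
    return r;
}

// z' = 0.5 * z + 0.5 * w, i.e. [-1, 1] -> [0, 1] after the perspective divide.
Mat4 adjustProjectionForZeroToOneClip(const Mat4& proj) {
    const Mat4 fix = {
        1, 0, 0,    0,
        0, 1, 0,    0,
        0, 0, 0.5f, 0,
        0, 0, 0.5f, 1,
    };
    return mul(fix, proj);
}
```

Going the other direction (forcing GL's [-1, 1] convention onto the other backends, as the sample appears to) is the same idea with the inverse fix-up matrix.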

It also makes the shadow render target have a color attachment (wtf? depth-only targets are supported just fine in GLES2/GL3 to my knowledge), and it also... doesn't use an index buffer when rendering? (EDIT: This is because it's using a 32-bit index buffer, which GLES2 doesn't support. But it's a much better idea to split it into multiple 16-bit index buffer draws if required than drop the index buffer entirely... also, you know, shaders have 4.6). I give up trying to understand what's going on. Oh, and despite building the uniformblock mapping table from before, you still have to use glPipelineState->getUniformBlockBindingPoint? What on earth?

This does not impress me.


What really makes me sad is that there is enough of my brain space wasted on gpu esoterics that I can understand this comment.

GPU programming is insane and the devs have stockholm syndrome.


Ouch, it is really bad.

I had only read the announcement when I submitted it – the library is quite clunky.


I disagree. Most of what's platform specific is stuff like shaders, and host window stuff. I definitely do NOT want a graphics layer owning that - I may want to control the type of os window (say, for an audio plugin). I already know gl shaders, and metal is really similar, a new 'common denominator' shading language would just be another language - with less docs, stackoverflow posts, etc.

And on the other hand, you get abstractions for the stuff that really is similar, like command buffers, camera control etc.

I think the API is well designed, and nails flexibility together with performance, at the cost of requiring to have, at least, expert knowledge in one library. After that I would get chatgpt to convert my shaders to metal, Vulkan, etc.


I think the documentation could be a lot clearer about this. I can understand if the goal is for the abstraction to be thin and unopinionated, and that means you have to be an expert in each backend. But they don’t seem to explain this very well.


There is an iOS/Metal sample here that seems to support OpenGL: https://github.com/facebook/igl/blob/main/samples/ios/snapsh...

The shaders are backend-specific but the rest is mostly generic?


Same impression here. The triangle example isn't better than a raw Vulkan one. Then I thought maybe it'd have more value in larger applications, but the Bistro demo is just the same kind of leaky abstraction code all over the place. The GUI is also just ImGUI. I don't know what value this library provides.


I thought the very same when I looked at the Tiny sample, just a lot of #if conditional compilation.


They claim to support WebGL as a compile target, so it seems like a pretty big missed opportunity that none of their demos link to in-browser WebGL running examples!


To be fair, we don't want WebGL anymore, we want WebGPU now =P


Speak for yourself. I'm preparing for WebGFX. Need to stay ahead of the curve.


WebRTX by Nvidia?


You joke but I wonder how big of an impact ray tracing in the browser would have, especially when the fidelity is increasing at such a massive rate. Some of the web demos I’ve seen, like the Unreal ones, are mindblowing.

(Also if that comment exposes any ignorance I have, please forgive me and point it out so I can learn something!)


I prefer the minimalist WebG approach.


I like HTML. Wait, what are we talking about?


Tangent you got me thinking about:

HTML clicked for me one day when I mentally decoupled the hypertext from the actual browser rendering. So many of us think HTML and imagine the point is to render a webpage. But HTML describes the semantics, topology, and content of a document. It’s 100% valid to “render” HTML in some other format like a PDF or an mp3.


I'm hoping we see a move to allow the rendering of the webpage to be entirely up to the users. Just provide the data, and let me decide how I want to interact with it. But that would ruin SEO and Ads, so we're gonna get in a buncha legal battles about web scrapers instead.


“Reader Mode” is a successful example. I’m actually shocked it exists because of how it impedes the things you mention.


But reader mode is mostly a bunch of heuristics with tons of ad-hoc special cases and hacks instead of relying on documents being well-structured. So in many ways it is the opposite of a successful example.

https://github.com/mozilla/readability/blob/main/Readability...


Oh true. Which kind of demonstrates the penalty for abusing HTML so much that it’s no-longer semantically reliable.


How long can it be called abuse if that is how HTML has been used for almost the entirety of its lifetime?


By then AI will have disrupted the ad-revenue model so fingers crossed we get the clean data!


Is it kind of a compromise then to "tag" HTML with classes for CSS?

CSS doing the "rendering," like laying out mobile-responsive versus desktop.

I wonder how we would separate out explicit class names from HTML, unless the tags themselves are <custom-names />. (Micro frontends & web components?)

Then it sort of works out nicely, I think.


HTML is the semantics, CSS is the styling, but you need both. Which is why browsers come with default CSS (which you can unset) for everything. You get the element tag to say "what it is", and you get the CSS classes to say "what visual rules to apply".


This is mostly true, but the asterisks cause a little chaos.

> HTML is the semantics ... the element tag to say "what it is"

Maybe this is best framed as a perspective thing.

"Semantic HTML" is about HTML authors using HTML elements in a way that is consistent with the definitions laid out in the specs. These definitions try to specify element semantics because user agents want to be able to do less-dumb things (things that don't work as well if HTML authors are constantly abusing tags for some presentational effect even though the semantics are weird or wrong).

The main consequence of this is that tag semantics (from the UA's perspective) won't always square with what the author assumes it means unless they go study the spec. For example, it's probably not hard to go find cases where the <address> tag is used for the obvious thing from the author's perspective: marking up addresses. The spec, however, explicitly contradicts this surface-level reading: https://html.spec.whatwg.org/multipage/sections.html#the-add... (i.e., it can be "correct" for pages to contain a mix of addresses that do and don't have the address tag.)


We also have a lot of tooling that invites semantic abuse for presentational effect (i.e., using markdown blockquotes as notes, and even the fancy behavior browsers attach to the <details> element).


Your comment is funny in a way; recall the web before CSS?

The 1990s web, with flashing tags and infant monkeys trying to cobble together a webpage?

Your comment brings so many images to mind.


Now HTML¹ is an output target to Flash-like games, tunneled video chat, and the flashpoint of global communities versus corporate priorities.

We can still do flashing tags and cobble together webpages; all we need is a text editor.

That's one allure of programming: we can (re)invent primitives of everything, for better or worse.

¹ With CSS and Javascript


That web was so much easier to scrape, though.


It’s a purity question. You can assign any attributes you want to an element. And some of them are formalized in various ways.


WebGPU will take ages and ages to be fully usable across all platforms (eg old androids)


Who cares about old Androids? Facebook su-, sorry, Meta sure doesn't.


There’s no clear date when even iOS will get it. It took years and years to get wasm simd on mobile safari


Any game/3D developers here be interested in WebGPU/WebAssembly support for Unreal Engine 5? Along with an asynchronous asset loading system, for dynamic fetching of assets at runtime.


The asset stuff is the bottleneck here, isn't it? We need browsers that support caching assets that are gigs in size.


Asset delivery is one of the key aspects, no doubt. Keep in mind that today, browsers support up to 4GB – but that limitation will eventually be lifted, and when it is, it will allow whatever local storage the user's client hardware has to be leveraged. This would enable AA and AAA desktop/console games on the web.


Just wait 10 years. The time that took WebGL to be fully portable.


Taking your comment too seriously... Webgl is the more ubiquitous target, so I'd still prefer that when it's sufficient.


I'm thinking about renting a couple WebTPUs myself.


I guess the website is still WIP; the docs definitely seem pretty spartan atm


If you're looking for something like this, Sokol is a much simpler alternative:

https://github.com/floooh/sokol

It doesn't support Vulkan, but if that's important to you, you're probably much better off just using Vulkan directly since it's supported on all the major platforms.


Sokol also provides a solution for shader cross-compilation (https://github.com/floooh/sokol-tools/blob/master/docs/sokol...), so you only need to write your shaders once no matter if you're targeting OpenGL, Metal, or DirectX.

There are other tools you could use out there with IGL, but Sokol's solution streamlines the whole process.


What about

https://github.com/gfx-rs/wgpu

It is written in Rust


It's also got C bindings with wgpu-native. There's also other good alternatives like Diligent engine and bgfx.


It seems to not support as many back-ends as IGL, for example GL ES 2.0 and OpenGL 2.x.


wgpu is great and worth considering


Vulkan is not supported on iOS/macOS, which is the major benefit of this release.


Khronos maintains MoltenVk though, which is as "official" as it gets: https://github.com/KhronosGroup/MoltenVK

...technically, Vulkan on Windows is also only supported via 3rd-parties (the GPU vendors), so the situation isn't actually all that different. The "Vulkan driver" is just bundled with the application on Mac instead of installed separately.


385 lines for a triangle: https://github.com/facebook/igl/blob/main/samples/desktop/Ti... When you are trying to sell a wrapper, you want your hello world example to be as small as possible, not as comprehensive as possible.


if you just want to draw a triangle, there's higher level libraries for that purpose. this is a low level library built to abstract (but map as close as possible to) modern backends (Vulkan, DX12, etc). the idea with these backends is to give precise control over the pipeline - that kind of precise control does not lend itself to the higher level abstractions you are looking for.

that said, it's not like this scales linearly so that 1,000 triangles is 385,000 lines of code. there's a lot of plumbing to setup the pipeline for your application's specific use case.

again, if your use case does not require the flexibility, look elsewhere.


Good to know. I was under the impression that to draw 2 triangles I'd need to copy-paste the whole code twice. I have been a graphics programmer for over 25 years, and I'm not even commenting on the API. The toy example certainly can be reduced in code length even given the current API.


... for metal, gl, and Vulkan.

Sorry but this is simply par for the course for any of the above.

You can certainly wrap a lot of that stuff, but you need to make assumptions, and the person using it is likely writing a demanding app and wants full control over literally everything - but they also would like to cut the time to port to Linux by half (say).


The code looks fine to me and a lot is setting up glfw. It's a graphics API abstraction layer like bgfx / sokol / wgpu, not a rendering libraries that would give you stuff like drawTriangle()


ye, it's approaching 50% of the LOC of a Vulkan hello triangle


but it's called Tiny!


ImGui spotted, someone should tell ocornut to add it to the list!

https://github.com/ocornut/imgui/wiki/Software-using-dear-im...


I love the initial commit message: (っ˘▽˘)っ IGL ⊂(◕。◕⊂)


Another cheeky comment from Meta devs: The location for pytorch's git repo is listed as "where the eigens are valued" : ^)


That raises the question of where they are vectored, though!


What is it there to love? What is this garbage supposed to mean? I like descriptive commit messages instead. Smileys or emojis are childish.


Oh neat! I used this when I worked at Meta about a year ago. It didn't support Vulkan at the time, so it's great to see they added that. It's nice that IGL abstracts the CPU-side code, but you still end up writing shaders for each platform.


Having to do a lot of your own switching at the implementation level seems annoying, I'm surprised there isn't a more simple API on top of that to give you the correct impl directly.


Can someone explain the significance of this? I do WebGL development and I believe there is already an intermediate layer called Angle that WebGL compiles to. The Angle layer decouples the hardware from the software and allows hardware vendors to develop drivers that run Angle and shader languages to target Angle without the two needing to know anything about each other. (Not sure if that's right?)

This seems like another intermediate decoupling layer?


ANGLE is ultimately an OpenGL ES implementation on top of other APIs, since WebGL is pretty close to GLES but desktops don't typically implement GLES.

This (IGL) is more a layer sitting between your app and the system-provided API, since pretty much every system has a different blessed API these days: Browser: WebGL/WebGPU; Windows: DirectX/Vulkan; Mac: Metal; Linux/Android: Vulkan; Consoles: proprietary APIs like NVN/GNM/AGC/DX12 with a lot of extensions.

Just about every major cross platform 3D graphics app/engine has a layer like IGL, this just seems to be an attempt to make Meta's a standard.


So then the full stack for using IGL in browser (on Windows at least) would be App Code -> IGL -> WebGL -> ANGLE -> DirectX -> Hardware device?

Owie


Technically there's also a driver layer sitting between the DirectX 'client-side' API and the hardware, so it's even worse ;)

(the whole point of more modern 3D APIs, which move most of the expensive "abstraction-layer translation work" into the initialization phase is to "cut through" all those layers in the frame loop though)


ANGLE is a portability layer for Windows which provides the OpenGL API on top of DirectX, since Windows clients aren't guaranteed to have good OpenGL drivers installed. WebGL is itself closely related to OpenGL so it makes sense to build it on top of that.


Interesting. Since it does not include windowing integration at all, it looks like you pretty much have to do the glue for each platform you support. This isn't too bad, but it could be better with adapters for common choices for windowing and context management (SDL, GLFW, etc.) Speaking of which, it seems the Linux path assumes X11 for now. Wonder if EGL/Wayland works at all at the moment, but I'm not at a desktop computer to give it a shot.

That said, all of this is relatively mundane, at the end of the day. I'm curious to try it out and see how the ergonomics/performance is. It honestly doesn't look too bad and it's kind of a good sign that a large amount of the triangle example is just windowing, because the actual rendering is relatively simple and succinct for a modern graphics API. I'd like to have an adapter for SDL2/3 and support for Wayland on Linux but otherwise it looks promising. Compared to other abstraction layers (like bgfx) it appears a bit more forward-thinking in some superficial regards at the very least. (Seeing a command queue abstraction in hello world makes me hopeful, anyways.)


It integrates quite easily with SFML. I took the Tiny example, removed the OpenGL branching and replaced the GLFW code with SFML.

260 LoC Vulkan / IGL / SFML example: https://github.com/eXpl0it3r/SFML-IGL


Any ideas why it excluded Direct3D as a target? It seems to be the only omission. Or is OpenGL/Vulkan support sufficient to cover Windows perhaps?


I'd say even Vulkan alone is sufficient to cover Windows.

The main benefit of DirectX 11 is that it's a lot simpler for the developer. But when you are creating an abstraction layer for multiple APIs, you are not the target audience for a simple to use graphics API.

IIRC Doom Eternal only supports Vulkan (on PC) and that didn't really cause problems. In fact, the game's performance was superb.


Probably. The OpenGL ES support also seems to use Google's Angle, which can target DirectX.


But if ANGLE on top of DirectX 11 is faster than Vulkan, that would point to a (highly unlikely) bug in the Vulkan driver.


Because Microsoft isn't their friend obviously.


Is it ever possible in the future to have an actual uniform graphics API, instead of having to make complicated abstraction layers to make cross platform graphics? (I mean theoretically, will it be possible in say 10 thousand years)


OpenGL was that, in the early days. But GPUs started being able to do more things than OpenGL could talk about, and people wanted to use all the new features to make shinier things.

We almost had it, with Vulkan. But Apple just had to Think Different.[1] If it were not for Apple, we would not need this intermediate graphics layer. It wasn't a win for Apple; the Mac-only game market is tiny.

[1] https://en.wikipedia.org/wiki/Vulkan


Forgetting about Microsoft, Nintendo and Sony?

Also about OpenGL extension spaghetti making it literally a bunch of mini-APIs that were only portable in name and basic features?

The iOS game market is huge by the way.


The Mac-only gaming market is tiny but the iOS market is huge.


I hope WebGPU is going to pull it off.


I understand why WebGPU defined a new API since it's primarily intended for web browsers, however, in this case why create a new API? Why not implement the OpenGL API? Essentially this could have been an OpenGL wrapper over the lower-level API's, e.g. over Vulkan, Metal, and Direct3D 12.

OpenGL has the advantage of being an open standard. Did Meta need custom behavior? If so, OpenGL already has a well-defined extension mechanism.


OpenGL is not the most straightforward, not the most compact, not the most ergonomic, not the most modern API. It's one of the most widespread though.

Meta produced this library not for the benefit of the general public; they produced it to make their own development easier and faster, and internally they are unlikely to benefit from OpenGL's ubiquity or backwards compatibility. They then released the library for the benefit of the general public. This is very nice, thanks! But we are not the main target audience.


I know next to zero about rendering capabilities. But when I saw "Meta | Release | Graphics Library" — given Facebook's history with data collection and the proprietary nature of their VR headset — I doubted they have any interest in open standards.


Facebook have released a bunch of good-quality open-source libraries and tools. Not just zstd, rocksdb, folly, flow, hhvm and other niche things, but also PyTorch and React, staples of two industries.

Since they are honestly open source, they have been inspected a lot, and, if they included any data siphoning, that would be long known, and long since deleted.

I don't share your skepticism.


Their technical achievements are not my domain. But I know behavior patterns: since Facebook's platform was built to harm people's ability to choose for themselves, I consider it a blight on humanity. I doubt they have any interest in serving the common good. But I hear that you don't share my skepticism; time will tell.


I think for the same reason WebGPU did: there is a ton of cruft in the OpenGL API and it doesn't really represent how the hardware works anymore.


Abstractions over the low level apis (opengl, directx, vulkan, metal) have gotten a bit popular in recent years. I think some driving forces behind it are inconsistent support of some apis on some platforms (i.e. opengl on macos) and maybe also a desire to support some of the newer apis (Vulkan or dx12) without going all in due to their complexity/verbosity.

I've been playing with wgpu-rs, bgfx, and a couple others in the past and they work pretty well for the most part. At this point I think I'd still rather choose just Vulkan for a new project though.


Having worked with OpenGL since around 1998 let me say: it's definitely the worst of all popular 3D APIs, and already has been since around the time D3D7 was released. It's time to let it go the way of the Dodo (just a shame that Vulkan didn't fix the worst OpenGL problem which is the vendor extension zoo).


Edit: I misread your question, so responded with a link to this article answering "why WebGPU": https://cohost.org/mcc/post/1406157-i-want-to-talk-about-web...

It does seem like kind of a mess. Unlike WebGPU, though, it doesn't sound like Meta's thing is much of an improvement.


Nobody wants to use OpenGL these days, it's a miserable API with 30 years of baggage and complex vendor-specific behavior


If Apple hadn't created Metal, the world would have been better off. Thank you, Meta, for your hard work.

Also, there are those who argue that starting new projects in C++ in 2023 is almost always wrong. How about checking out the new C++ library? Do you think your preferred language X would be better than C++ in this situation?


Metal is important because it demonstrates that a modern 3D API doesn't have to be a complicated mess.


I'm curious, how is Metal simpler? And is it fair to say Metal is simpler when it's only available on Apple hardware?


TL;DR: Metal has a more balanced approach to API design than both Vulkan or D3D12. In general, "programming ergonomics" seems to have been an important design goal, which is something that seems completely absent both in D3D12 and Vulkan (basically, if the Vulkan or D3D12 design teams had to decide between making an API feature more convenient to use for the programmer, or more explicit at the cost of less convenience, then the answer was always "make it explicit", while the Metal design team at least seems to have thought about the consequences for the API user before making a decision - at least that's what it looks like from the outside looking at the resulting APIs).

Metal started with a programming model that looks a lot like what a hypothetical "D3D11 next" could have looked like (basically D3D11 minus the warts, and plus PSOs, command queues and render passes, but keeping the traditional and straightforward slot-based resource binding model).

Later versions then gradually added optional lower-level, more explicit features which allow more control over resource management and accessing specific GPU features with less API overhead, but may also be less convenient to use (and it's not actually just "Apple GPUs", Metal supports Apple devices with Intel, AMD and NVIDIA(?) GPUs just fine - since there were Mac laptops which shipped with those GPUs).

Apple also maintains higher level libraries like MetalKit and SceneKit.

Because of this 'layered approach', a Metal application can just start with the higher level API features to get something running quickly, and then gradually switch to more recent and more explicit Metal features only when needed or desired.

One could also simply say that the Metal team applied common sense, "taste" and a balanced approach to their API design, instead of just mechanically collecting hardware feature requirements from all GPU vendors and trying to cram those into a common low-level API at all cost.


If Apple hadn't created Metal we would have nothing to compare Vulkan with and to point and laugh at how badly Khronos did. (DX doesn't count, MS has no taste)


The top comment on that page sums it up:

DJTEK: The wording of this could be applied to virtually any library out there its so vague. The key features section is like a mixtape of the worlds graphic library descriptions greatest hits. Lol


trying to build this rn, and the download script has already pulled like a gigabyte of dependencies wth

edit: it's already like 2 gb


You mean the textures and meshes it downloads?


Darn, I opened the link and saw it was using Python. Sadly that seems to just be for the install process, for whatever reason.

Does anyone have any recommendations for an intermediate graphics library that uses Python and supports compiling your code to a WebGL target?

I'm interested in exploring SDF functions in a parametric CAD context, but coming from a Mechanical Engineering background. Not having to learn a new programming language for a side curiosity would be ideal.
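For what it's worth, the SDF composition operations themselves (union, intersection, subtraction) are just min/max over distance functions and can be prototyped in plain Python before committing to any particular graphics stack. A toy sketch — nothing below comes from IGL or any CAD library; all names are illustrative:

```python
import math

def sphere(radius):
    """SDF of a sphere centered at the origin: negative inside, zero on the surface."""
    def sdf(x, y, z):
        return math.sqrt(x * x + y * y + z * z) - radius
    return sdf

def translate(sdf, dx, dy, dz):
    """Move a shape by shifting the query point the opposite way."""
    return lambda x, y, z: sdf(x - dx, y - dy, z - dz)

def union(a, b):
    """Boolean union: a point is inside if it is inside either shape."""
    return lambda x, y, z: min(a(x, y, z), b(x, y, z))

def subtract(a, b):
    """Cut shape b out of shape a."""
    return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

# A "dumbbell": two unioned spheres with a small notch cut out between them.
shape = union(sphere(1.0), translate(sphere(1.0), 2.0, 0.0, 0.0))
shape = subtract(shape, translate(sphere(0.5), 1.0, 0.0, 0.0))

print(shape(0.0, 0.0, 0.0))   # negative: inside the left sphere
print(shape(1.0, 0.0, 0.0))   # positive: inside the cut-out notch
```

This is obviously far too slow for real-time evaluation on a grid, but it's enough to test whether the modeling idea works for a parametric CAD experiment.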


Python has wgpu wrappers. Why do you want to use Python though for such a heavy graphics program?


Purely because I already know it well enough to be semi-dangerous. My day job is designing physical goods, I don't have the time to learn anything new just to satisfy my curiosity with signed distance fields.


I don't think you realize just how horrendously bad Python would be for this application. The global interpreter lock and marshaling between Python objects and GPU state would absolutely kill any kind of performance.

Pick up Rust and Bevy. It should be pretty easy to mock up what you want, and you can dip into wgpu when you need extra fine control over the GPU.


You probably have already seen this, but I would take a look at libfive for inspiration

https://libfive.com/


Not trying to shit on anyone, but that screenshot reminds me of 1990s/early-2000s games. I'm sure a ton of work went into it, but some better textures, or highlighting the usability, would go a long way.

Just based on the screenshot, I am not sure that I would even bother to dig too deeply into the library.


"Just based on the screenshot, I am not sure that I would even bother to dig too deeply into the library."

Adding any PBR materials as samples would have been the wrong choice, since those are not hardware- or graphics-API-dependent, and are always for the implementor to implement themselves.

You don't want a graphics api to look nice at this abstraction level.

You get access to device resources, shader API etc.

Once you get triangles in, it's up to you to make it nice using the shaders you write - materials and GI model of your choice.


> Adding any PBR materials as samples would have been the wrong choice, since those are not hardware or graphics api dependent, and are always for the implementor to implment by themselves.

That "bistro" image is all PBR materials, represented in glTF. It's supposed to look the same for all standards-compliant glTF renderers, and it pretty much does. I posted the same scene in another renderer above. It's a brightly sunlit scene with no environment shaders, so it looks rather blah. glTF and Vulkan can do more than that, but this is all the test example asked for.



I don’t see the API claiming to be a standards-compliant glTF renderer, and I would be very confused if it did claim something like that as its feature. That said, a "render glTF PBR" sample would not be a bad thing to show the author's intent for how to organize things, etc.


Now look at real 1990s/early-2000s screenshots. Your memory is deceiving you.

This screenshot is far from current AAA games, but there is no way to render such a scene in a game made for 2000 hardware.


It looks about on brand with the static renders used in a lot of Final Fantasy games of the PS1 era, albeit with higher pixel density of course, so I can see the resemblance. That said, this is definitely doing it in engine, so while I can see how the GP's memory palace built that memory, you're definitely accurate that no game from that era was doing graphics like this in-engine.


Half-Life 2 had better-looking cityscapes, so early 2000s is accurate.


Half Life 2: Lost coast from 2005 might be a fair comparison for this droll hypothetical

https://youtu.be/j-Iykz0gb7Q (video uploaded 2006)


Pre-rendered, they were. Vaguely reminds me of Myst or FF7 quality.


For those reading who may not remember or have played the original FF7, here are links to pre-rendered backgrounds in FF7:

https://www.jmeiners.com/pre-rendered-backgrounds/img/ff7.jp...

and the image in the article about Meta's library:

https://www.khronos.org/assets/uploads/blogs/2023-july-blog-...


Not apples-to-apples because FF7 was NTSC resolution and heavily compressed.

If FF7 were rendered at the same resolution, I think it would be comparable. Here's an FF7 AI upscale mod for reference:

https://www.resetera.com/threads/a-full-high-res-ai-upscale-...


Pre-rendered? Sure. In-game? No. I think the art direction is half of the problem here. You can place lighting much better and the textures are lacking.


There is no "art direction" by Meta or the people working on this project; this is a reference scene for PBR pipelines.

https://developer.nvidia.com/orca/amazon-lumberyard-bistro


You are right, the lighting makes the original scene look worse.


Unreal and Quake engines handled light and textures far better than that.


The screenshot could/should probably be better, but that doesn't mean the library is incapable of producing higher-quality renders. I haven't dug in, but I am assuming this is basically Meta's WGPU. If so, these sorts of libraries are low-level libraries abstracting different platforms, used to build high-quality render pipelines on top of that can run anywhere. You could build an N64-quality rendering pipeline with little effort, or something rivaling AAA studios with a lot more knowledge and effort.

I guess, to make a poor analogy, your comment is sort of like looking at a still frame of a poorly shot movie and complaining that the codec is shit.


It does seem to be Meta's answer to WGPU.

The picture looks like they didn't have automatic tonemapping, the rendering equivalent of auto exposure control. So the picture is too dim. I brought it into a photo editor, saw that the top third of the intensity space was empty, used "Levels", and it looked much better.

That's a standard glTF test scene, called "bistro". Here's the same scene, rendered with Rend3/WGPU.[1] Here's the source code for that example.[2] Rend3 is a level above WGPU; it deals with memory management and synchronization, so you just create objects, materials, transforms, and textures, then let the renderer do its thing. Rust handles the object management via RAII - delete the object, and it drops out of the scene.

Looking at Meta's examples, there are too many platform-specific #ifdef lines. More than you need with WGPU. Probably because WGPU is usually used with something like Winit, which abstracts over different window systems.

We'll have to wait for user reports about performance. Meta didn't show any video. Here's a test video of mine using Rend3/WGPU on a town scene comparable to the "bistro" demo.[3] This is a speed run, to test dynamic texture loading and unloading while rendering. The WGPU people are still working through lock conflicts in that area. The idea with Vulkan land is that you should be able to load content while rendering is in progress. For that to be useful, all the layers above Vulkan also have to have their locking problems hammered out. Most open source game engines don't do that yet. Unreal Engine and Unity do, which is why you pay for them for your AAA title.

[1] https://raw.githubusercontent.com/BVE-Reborn/rend3/trunk/exa...

[2] https://github.com/BVE-Reborn/rend3/blob/trunk/examples/scen...

[3] https://video.hardlimit.com/w/sFPkECUxRUSxbKXRkCmjJK
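The "Levels" fix is essentially a linear remap of the intensity range, and automatic tonemapping operators such as Reinhard do a smoother version of the same compression. A toy sketch in plain Python — pixel values and function names are illustrative, not from any renderer:

```python
def levels_stretch(pixels, black=0.0, white=1.0):
    """Linearly remap [black, white] to [0, 1], the core of an editor's Levels tool."""
    scale = 1.0 / (white - black)
    return [min(1.0, max(0.0, (p - black) * scale)) for p in pixels]

def reinhard(pixels):
    """Reinhard tonemapping: smoothly compress HDR values into [0, 1)."""
    return [p / (1.0 + p) for p in pixels]

# A dim render whose brightest pixel only reaches ~0.66 of the displayable
# range, i.e. the top third of the intensity space is empty.
dim = [0.05, 0.2, 0.4, 0.66]

stretched = levels_stretch(dim, black=0.0, white=0.66)
print(stretched)  # the brightest pixel now reaches 1.0
```

A renderer with auto-exposure would compute something like `white` from the scene's measured luminance each frame instead of relying on a manual edit after the fact.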


Why does Meta need an answer to WGPU? How does the existence of WGPU create problems for them, and how does this new thing solve problems that anyone else has with WGPU?

This just feels like sour grapes about the fact that WGPU excluded Khronos when it was developed, so Khronos wants their own, with maybe a bit of promotion-driven development on Meta's part.


hi John! you know a lot more about this stuff than I do. is it possible they just haven't implemented a full PBR pipeline for this demo/screenshot, or do you think this (the differences in the two screenshots) is more an indication of what would likely be areas for future development?


They seem to have implemented everything that the "bistro" scene calls for. I don't know if those hanging colored lights emit light, though. Rend3/WGPU doesn't handle large numbers of light sources yet. But you wouldn't see them in daylight anyway, because this is high-dynamic-range rendering, and, as in real life, those lights are dim relative to the sun.

Here's the same scene in Godot.[1] This was modified a bit, and has accurate values for the lamp illumination. So they are totally washed out by the sun.

And here it is in several other renderers, with a video.[2]

The original scene was in .fbx, from Amazon's "Lumberyard" project. [3] That project started as the Crysis engine, was bought by Amazon, spun off as open source, was renamed Open 3D Engine, and is still getting Github changes, so it's not dead.

There are many open source game engines. Most of them get stuck at "mostly works, not ready for prime time". That's where the problems get hard and fixing them stops being fun.

[1] https://github.com/godotengine/godot/issues/74965

[2] https://www.ronenbekerman.com/orca-amazon-lumberyard-bistro/...

[3] https://developer.nvidia.com/orca/amazon-lumberyard-bistro

[4] https://en.wikipedia.org/wiki/Amazon_Lumberyard


The problem is, it implies that the library isn't capable of higher quality. I could imagine maybe because it is a lowest common denominator. Or because the Metaverse is not focusing on high-end graphics, as their previous releases looked poor. Or maybe this is something like VRML: designed to be fairly barebones, something people would use for museum websites and educational tools but not for graphically intense games.


I always see this idea on HN that a library, website, framework should be marketing itself for mass appeal and adoption.

this is made for people building rendering engines on top of. if you are the software engineer with the knowledge necessary to do that, the screenshot is probably not going to influence you, because you understand what this is for. if you aren't, why should they be marketing to you with eye candy?


For better or worse, these graphics API shots are not the place to show cool shaders. SIGGRAPH papers are usually the same dry, boring test scenes. It's just the culture of it.


The textures are ok, but the lighting is super flat. People are used to games making at least some attempt at global illumination, whether it's prebaked light maps, or faking it with SSAO, or anything to not have surfaces be a totally consistent brightness across the whole thing.
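To make the "flat" complaint concrete: a surface shaded with only a constant term looks identical at every orientation, while even a single Lambert diffuse term plus ambient varies with the surface normal. A toy sketch in plain Python — nothing here is IGL API, and the numbers are illustrative:

```python
import math

def lambert(normal, light_dir, ambient=0.1):
    """Diffuse (Lambert) shading plus a constant ambient term.
    Brightness varies with surface orientation, which is what flat lighting lacks."""
    ndotl = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return min(1.0, ambient + ndotl)

light = (0.0, 1.0, 0.0)  # directional light shining straight down

facing_up   = lambert((0.0, 1.0, 0.0), light)                         # fully lit
angled      = lambert((math.sqrt(0.5), math.sqrt(0.5), 0.0), light)   # partially lit
facing_down = lambert((0.0, -1.0, 0.0), light)                        # ambient only

print(facing_up, angled, facing_down)
```

SSAO, baked lightmaps, and real GI then modulate that ambient term per point instead of leaving it constant, which is where most of the perceived depth comes from.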


Your rendering API is not going to implement GI for you, and having it in a sample app is kind of misleading, that's not really the point. It's probably a mistake to include that as a sample scene as it creates the impression that it's trying to be a game engine. A few material spheres and test meshes would probably be a better example.


Agreed, just pointing out why parent commenter gets the “1990s/early 2000's game” impression from the screenshot


Whether you like the textures has nothing to do with whether the graphics library is any good.


This is a great example of a principle I heard from CoderFoundry: "People are visual buyers. If it looks good, people assume the code is good."


Sure but it has everything to do with whether I can easily tell that it's good.


I can slap together a few high-res textures in SDL_Renderer, and maybe hack in some pretty shaders. Doesn't mean it's a good API.


The original Vulkan Demos were butt ugly too. Then 2 or 3 years later that super flashy DOOM remake went all-in on it and shut everyone up for a while.


Counter Strike 1.6 energy


Let’s go go go!


I like it. It manages to be on the healthy side of the uncanny valley, so as to feel more like an actual inhabitable world instead of a disconcerting knock-off of the real world.


The Utah teapot is never around when you need it.


You are too kind. Mid to late 90s game. What is up with those shadows. I have written a 3D-engine with better image quality than this like 20 years ago.

At least it is not a teapot.


> I have written a 3D-engine with better image quality than this like 20 years ago.

Well good news, it's not a 3d engine at all! It's a nice common API to cover all the existing graphics APIs.


A large part of creating a 3D-engine is to try figuring out what capabilities can be used with what performance across different hardware. If this is only an abstraction it won't solve anything.


It's not texture, it's the lighting, it makes it feel very flat, especially when contrasting with RTX'd stuff which we all have in memory to some degree.


Lmao, people are absolutely oblivious to the effects of lighting and it shows.

Go watch nvidia’s demo of their lighting and scene modification/remastering tool.


It's a nice project, but isn't the majority of the problem it "resolves" already solved by Vulkan? I was under an impression Vulkan runs on everything. However I only used Vulkan (with ncnn) for AI. I don't know how tedious it might be to use it for graphics directly.


> I was under an impression Vulkan runs on everything.

Sadly, that's not the case. For game consoles, neither Xbox nor PlayStation support Vulkan. iOS and macOS have their own Metal API (because of course they got to have their own thing).

So, with just a Vulkan backend, you can only target Windows, Linux, Android, and Nintendo Switch natively. Translation layers like MoltenVK may help though.

It's still unclear whether the next iteration of the Switch will continue to support Vulkan.


Switch currently supports Vulkan but could drop it?


The Switch supports Vulkan, but it's also quite old at this point. Nintendo will probably release a new console in the near future and we simply don't know whether that one will support Vulkan or not.


It doesn’t seem to have Metal support unfortunately. Otherwise looks nice, could be fun for small pet projects.


Says it supports Metal 2+

https://github.com/facebook/igl



Similar to https://dawn.googlesource.com/dawn in the sense that both are solving the cross platform problem. But Facebook is solving it at a higher level of abstraction


Any reason this is better over BGFX?


What sort of Docker support will this have? I have a very shaky setup right now to run Unity headless that requires OpenGL and VirtualGL. It feels like that pipeline won’t work forever and am looking for alternatives.


This wouldn’t help you since Unity wouldn’t target it.

Unless you were to switch to a homegrown solution that would make use of this


Just noticed the installation command "python3 deploy_content.py" installs the scene and not the library. Don't run that one if you don't want to download 2 GB of stuff; just run the deploy_deps one.


I hope this is an easy-to-use high-level library that wraps the Vulkan APIs. That would help solve one big complaint, that Vulkan is too low-level for normal programmers.


How does this compare to Google's filament? [0]

[0]: https://github.com/google/filament


Filament is a full-blown renderer (with shadows, advanced materials, effects like transparency, postprocessing) with platform abstractions. IGL has only the platform abstractions; the rest is left as an exercise for the reader.


Holy fuck that triangle renderer hello world example lol


I think I'll take my graphics libraries from companies that have actually done something with them and know what they should look like.


I think this may be useful, but part of me immediately thinks of this XKCD about standards: https://xkcd.com/927/


What’s the best way to get handle on modern graphics programming, particularly WebGL/WebGPU?


Use a game engine that offers you a scene graph, and don't deal with the draw level directly.


Who would benefit from using this? People building 3D engines such as Unity or Unreal?


Thnx Meta! This is so cool that it's like a high-level Khronos release. #1 priority is I believe getting USD / Hydra rendering to WASM targets for browser use. But in the meantime, ray tracing doom ii summer of fun! ;)

Also, TIL the Meta Quest 2 is running Android 12L??


Is this a 1.0 release? Can we expect API stability?


what does intermediate graphics library mean?


so is this C++ only, no C?


Vulkan is supported.


No mention of Metal


Supported platforms from the repo README:

  - Metal 2+
  - OpenGL 2.x (requires GL_ARB_framebuffer_object)
  - OpenGL 3.1+
  - OpenGL ES 2.0+
  - Vulkan 1.1 (requires VK_KHR_buffer_device_address and VK_EXT_descriptor_indexing)
  - WebGL 2.0


But not in the linked article.

“It supports various graphics APIs, such as OpenGL®, OpenGL ES™, WebGL™, and Vulkan®”

Seems strange that’s missing from one and showing in the other.


Metal is supported, it’s mentioned in the GitHub readme.

https://github.com/facebook/igl


It's a Khronos press release so not carrying water for the opposing proprietary closed system makes sense. I wouldn't mention that other thing either.


"IGL is designed to support multiple backends implemented on top of various graphics APIs (e.g. OpenGL, Metal and Vulkan) with a common interface."


IGL does support Metal, according to the README which is linked at one point.


It supports Metal. In fact Metal is probably one of the reasons they built an adapter like this in the first place since GL became less of a supported thing on Darwin.


Thanks but no thanks Meta.



