> WebGL was getting really old by now. I do wonder whether WebGPU is a bit late too though (e.g. right now Vulkan decides that PSOs maybe are not a great idea lol)
> As in, WebGPU is very much a "modern graphics API design" as it was 8 years ago by now. Better late than never, but... What's "modern" now seems to be moving towards like: bindless everything (like 3rd iteration of what "bindless" means), mesh shaders, raytracing, flexible pipeline state. All of which are not in WebGPU.
I'm not that well versed in the details, but it would be interesting to hear what the advantages of this modern bindless way of doing things are.
Aras is right, but the elephant in the room is still shitty mobile GPUs.
Most of those new and fancy techniques don't work on mobile GPUs, and probably won't for the foreseeable future. Vulkan should actually have been two APIs: one for desktop GPUs and one for mobile GPUs - and those new extensions are doing exactly that, splitting Vulkan into two more or less separate APIs: one that sucks (the one for mobile GPUs) and one that's pretty decent (but only works on desktop GPUs).
WebGPU cannot afford such a split. It must work equally well on desktop and mobile from the same code base (with mobile being actually much more important than desktop).
I think it's unrealistic expectation management to insist that desktop and mobile must or should be equal. There are plenty of web application use cases one would like to run on a desktop that are irrelevant on mobile, for many other reasons as well - think editing spreadsheets, for example.
WebGPU says the baseline should be what is supported on both desktop+mobile, and that extensions (in the future) should enable the desktop-only use cases.
Others seemingly argue that mobile should be ignored entirely, that WebGPU shouldn't work there, or that it should only work on bleeding-edge mobile hardware.
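For what it's worth, that baseline-plus-extensions model is already visible in the API's optional features. A minimal sketch in TypeScript - the feature name is from the WebGPU spec, everything else is illustrative:

```ts
// Sketch: WebGPU's baseline-plus-optional-features model.
// A plain requestDevice() is the works-everywhere baseline; anything
// beyond it must be detected on the adapter and requested explicitly.
const adapter = await navigator.gpu.requestAdapter();
if (!adapter) throw new Error("WebGPU not available");

const requiredFeatures: GPUFeatureName[] = [];
if (adapter.features.has("texture-compression-bc")) {
  // BC texture compression is mostly a desktop-class capability.
  requiredFeatures.push("texture-compression-bc");
}
const device = await adapter.requestDevice({ requiredFeatures });
```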
This is an odd analogy. We should reduce the API space for mobile so devs don't make mobile spreadsheets? I mean...what is this arguing exactly? UX is different, sure, but how does that translate into something this low level?
Can you explain what the split is supposed to be? I'm fairly confused because mobile GPUs (tile based) are creeping into the desktop space. The Apple Silicon macs are closer to tile based mobile GPUs than traditional cards.
What APIs are supposed to be separate, why, and what side of the fence is the M1 supposed to land on?
In places where Vulkan feels unnecessarily restrictive, the reason is mostly some specific mobile GPU vendor which has some random restrictions baked into their hardware architecture.
AFAIK it's mostly not about tiled renderers but about resource binding and shader compilation (e.g. shader compilation may produce different outputs based on some render states, and the details differ between GPU vendors, or bound resources may have all sorts of restrictions, like alignment, max size or how shader code can access them).
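As a concrete example of how those binding restrictions bubble up, WebGPU exposes them as queryable device limits rather than hard-coding one vendor's numbers into the spec. A rough sketch - the helper name is made up:

```ts
// Sketch: per-vendor hardware restrictions surface as queryable limits.
declare const device: GPUDevice;

// Dynamic uniform-buffer offsets must be multiples of this (often 256):
const align = device.limits.minUniformBufferOffsetAlignment;

// Hypothetical helper: pad per-object uniform data so each dynamic
// offset lands on an allowed boundary regardless of the GPU.
const strideFor = (bytesPerObject: number): number =>
  Math.ceil(bytesPerObject / align) * align;
```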
Apple's mobile GPUs are pretty much the cream of the crop and mostly don't suffer from those restrictions (and any remaining restrictions are part of the Metal programming model anyway). Even on Metal there are quite a few differences between iOS and macOS though, which even carried over to ARM Macs - although I don't know if those are just backward-compatibility requirements so that code written for Intel Macs also works on ARM Macs.
It's mostly on Android where all the problems lurk though.
Ah ok, so it's not so much the mobile architecture as the realities of embedded GPUs and unchanging drivers, compared to the more uniform Nvidia/AMD desktop drivers.
This is a real problem but I'm not sure splitting the API is a solution. If a cheap mobile GPU has broken functionality or misreports capabilities, I'm not sure the API can really protect you.
Uh, no; it's power and heat management - so battery life and fire risk - that limits small form factors. It would be good for mobile devices to have external GPU/battery attachments via a universal connector... this would boost the efficacy of devices... you may not always need the boost provided by the umbilical, but when you do need it, just put it outside the machine and connect it when needed...
> with mobile being actually much more important than desktop
How so?
I always thought the more common use case for GPU acceleration on the web on mobile was 2D games (Candy Crush etc.). Even on low-end devices these are already plenty fast with something like Pixi, no?
We live in a bubble where we don't notice it, but desktop as a platform is... not dying exactly, but maybe returning to 90s levels of popularity. Common enough, but something tech-minded people use, and not necessarily for everybody. Mobile is rapidly becoming the ubiquitous computing paradigm we all thought desktop computers would be. In that world, WebGPU is much more important on mobile than on desktop.
I genuinely think personal computing has been severely hamstrung over the past decade+ due to the race to be all-encompassing. Not everything has to be for everyone. It's ok to focus on tools that only appeal to other people in tech. It really is.
A chromebook, internally, is more a mobile device than a "real" computer. Plenty of high school kids today will own their first real computer when they go to college. Until then, most of their computing is done on their iPhone or iPad, and perhaps their school-issued chromebook.
We see this issue with kids of that generation entering the workforce lacking basic computer skills, or with CS students in college who need the concept of a hierarchical file/directory structure explained to them.
> A chromebook, internally, is more a mobile device than a "real" computer
How is that? And if so, how am I typing this on an Intel i5 Chromebook with 16 GB of RAM that is hosting a Linux VM? If upgradeability is the issue, Framework's Chromebook is completely upgradeable.
In general, WebGL has more CPU overhead under the hood than WebGPU, so the same rendering workload may be more energy efficient when implemented with WebGPU, even if the GPU is essentially doing the same work.
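As a sketch of where that overhead difference comes from: WebGPU can prerecord draw commands into a render bundle and replay them each frame, while WebGL re-validates state on every draw call. The pipeline, bind group, and vertex buffer here are assumed to be set up elsewhere:

```ts
// Sketch: prerecord commands once, replay them cheaply every frame.
declare const device: GPUDevice;
declare const pipeline: GPURenderPipeline;  // assumed created elsewhere
declare const bindGroup: GPUBindGroup;      // assumed created elsewhere
declare const vertexBuffer: GPUBuffer;      // assumed created elsewhere

const enc = device.createRenderBundleEncoder({ colorFormats: ["bgra8unorm"] });
enc.setPipeline(pipeline);
enc.setBindGroup(0, bindGroup);
enc.setVertexBuffer(0, vertexBuffer);
enc.draw(3);
const bundle = enc.finish();

function frame(encoder: GPUCommandEncoder, view: GPUTextureView) {
  const pass = encoder.beginRenderPass({
    colorAttachments: [{ view, loadOp: "clear", storeOp: "store" }],
  });
  pass.executeBundles([bundle]); // near-zero per-frame CPU cost
  pass.end();
}
```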
In bindless (pointers) you say "at this GPU memory location I have a texture with these params".
In non-bindless you say "API create a texture with these params and give me a handle I will later use to access it".
Bindless gives you more flexibility, but it's also harder to use since it's now your responsibility to make sure those pointers point at the right stuff.
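To make the handle-based model concrete, this is roughly what it looks like in WebGPU (which is non-bindless); the bind group layout is assumed to be created elsewhere:

```ts
// Sketch of the handle-based (non-bindless) flow, WebGPU-style.
declare const device: GPUDevice;
declare const layout: GPUBindGroupLayout; // assumed: texture at binding 0

// "API, create a texture with these params and give me a handle":
const texture = device.createTexture({
  size: [256, 256],
  format: "rgba8unorm",
  usage: GPUTextureUsage.TEXTURE_BINDING | GPUTextureUsage.COPY_DST,
});

// The handle is opaque and validated by the API; you never touch the
// underlying GPU address yourself.
const bindGroup = device.createBindGroup({
  layout,
  entries: [{ binding: 0, resource: texture.createView() }],
});
```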
It's a bit more complex than that. In classical OpenGL (and thus WebGL) "bindless" is more significant: You had to bind resources to numbered stages like TEXTURE2 in order to render, so every object with a unique texture required you to make a bunch of API calls to switch the textures around. People rightly rejected that, which led to bindless rendering in OpenGL. Even then however you still had to create textures, the distinction is that you no longer had to make a billion API calls per object in order to bind them.
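For illustration, that bind-to-a-numbered-unit dance looks roughly like this in WebGL; names like uTex are made up:

```ts
// Sketch of the classic bind-to-a-numbered-unit model in WebGL.
declare const gl: WebGLRenderingContext;
declare const program: WebGLProgram;
declare const objectTexture: WebGLTexture;

gl.useProgram(program);
gl.activeTexture(gl.TEXTURE2);                 // select texture unit 2
gl.bindTexture(gl.TEXTURE_2D, objectTexture);  // bind this object's texture
const loc = gl.getUniformLocation(program, "uTex"); // "uTex" is made up
gl.uniform1i(loc, 2);                          // shader samples from unit 2
gl.drawArrays(gl.TRIANGLES, 0, 3);
// ...then re-bind for the next object with a different texture.
```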
Critically however, things like vertex buffers and fragment/vertex shaders are also device state in OpenGL, and bindless textures don't fix that. A fully bindless model would allow you to simply hand the driver a bundle of handles like 'please render vertices from these two vertex buffers, using this shader, and these uniforms+textures' - whether or not you have to allocate texture handles first or can provide the GPU raw texture data is a separate question.
How badly can you wreck state in bindless? Badly enough to see pointers from another process, or to leak a lot of high-detail information about the computer running the program?
If so, that'd be a non-starter for a web API. Web APIs have to be, first and foremost, secure and protect the user's anonymity.
“The web” should not first and foremost protect anonymity. It should do what humans need it to do ideally while keeping users private and secure. If there’s a concern, my browser should ask me if I’m willing to share potentially sensitive information with a product or service. I fucking hate this weird angsty idea that the web is only designed for anonymous blobs and trolls.
Letting advertisers identify you through some web accessible GPUs interface so they can track your every move and sell the data to all comers … won’t help you fight anonymous online trolls.
All of this is in the context of a browser. If a misbehaving web app uses pointers for memory from another process, that should be blocked by all of the same things that prevent non-privileged apps from doing the same thing.
Yeah, WebGPU unfortunately ended up becoming an outdated mobile phone graphics API on arrival. Still better than WebGL, but not quite what I would have liked it to be.