I don't understand why this is a priority when WebGL is still so rough. Maybe we wouldn't need a new API for performance if WebGL worked better. There seems to be lots of room for improvement. My WebGL programs were much slower and harder to write than the native versions of the same programs.

We should also probably sort out the native low-level APIs before setting the standard for the web, because otherwise we're building on top of a big mess. Though, my impression is that the WebGPU initiative is basically just another battleground in that struggle. I don't have any faith that this is being done for the good of users. It's just strategic ground to capture.



Because WebGL is an evolutionary dead end, for a variety of reasons. The initial idea was to track OpenGL ES, but that isn't really true anymore. Because of Windows, WebGL has to stick to a subset that can be easily translated to Direct3D. Because of GPU process sandboxing, anything that flows back from GPU to CPU is a huge synchronization problem. On top of that, mobile GPU drivers continue to suck badly, which means most of the extensions that were supposed to expand WebGL's scope into this decade are still out of reach, with 50% or less support in practice.

On the flipside, the native low-level APIs have all diverged. Vulkan, D3D12 and Metal each made different decisions, so it's pretty much inevitable that a 4th standard will have to be created to unify them. It will probably be higher level than any of the 3, and it will still be subject to strong sandboxing limitations.

Personally I think the big issue is that people fixate on the traditional graphics pipeline. Modern renderers have evolved past this, with various compute-driven and/or tiled approaches commonplace now. They're a nightmare to implement against current APIs, because a small change in strategy requires a rewrite of much of the orchestration code. Figuring out how to map your desired pipeline onto the hardware's capabilities should be the job of a compiler, but instead people still do it by hand. Plus, for GPU compute-driven graphics to be useful for interactive purposes beyond just looking pretty (i.e. actual direct manipulation), you absolutely need to be able to read back structured data efficiently. It's not just about RGB pixels.
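
To make "structured readback" concrete, here's a rough sketch of GPU picking, in CUDA only because that keeps it short and self-contained; the same idea applies to storage buffers in Vulkan/D3D12/Metal/WebGPU. The kernel and the ID layout are made up for illustration - a real renderer would fill the ID buffer from its visibility pass - but the point is that what you read back is an object ID, not pixels.

    // Hypothetical sketch: read back one object ID from a GPU-side ID buffer.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void fill_id_buffer(unsigned int* ids, int width, int height) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;
        // Stand-in for "which object covers this pixel"; a real renderer
        // would write this during its visibility / G-buffer pass.
        ids[y * width + x] = (x / 64) + (y / 64) * 16;
    }

    int main() {
        const int w = 1024, h = 768;
        unsigned int* d_ids = nullptr;
        cudaMalloc(&d_ids, w * h * sizeof(unsigned int));

        dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
        fill_id_buffer<<<grid, block>>>(d_ids, w, h);

        // "Direct manipulation": read back only the ID under the cursor,
        // not the whole framebuffer.
        int cx = 400, cy = 300;
        unsigned int picked = 0;
        cudaMemcpy(&picked, d_ids + cy * w + cx, sizeof(picked),
                   cudaMemcpyDeviceToHost);  // implicit sync point

        printf("object under cursor: %u\n", picked);
        cudaFree(d_ids);
        return 0;
    }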

There's an immense amount of potential locked inside, but the programming model is a decade or two out of date. Only AAA game companies and the vendors themselves have the resources to do novel work under these constraints. Everyone else has to make do with the scraps. Even the various attempts at LISPy GPU composition fall short, because they don't attempt to transcend the existing pipeline.


I would like to hear more about this. I was quite surprised by how difficult things were when I started dabbling in OpenGL, and I thought there had to be a better way. I know that there are libraries that build on top of OpenGL and the like, but then it's always a sacrifice of the power you could have. It seems weird to me that it is so difficult, because conceptually it seems the model could be closer to the CPU/memory model that everyone is already familiar with. You just have some RAM and some processor(s) that are going to do some computations, right? Although I guess what really makes it a mess is that there needs to be a connection between what the GPU and the CPU are doing. I don't know, I was a bit surprised by how difficult it was. Perhaps I just don't understand it well enough.


To attempt to explain (desktop) GPU architecture: you don't just have memory and a bunch of individual cores on a GPU like you would on a CPU. You've got memory, texture sampling units, various other fetch units, fixed-function blending/output units, raster units, a dispatcher and then a ton of processing elements. These are all things the programmer needs to set up (through the graphics API). Each of those processing elements runs several warps (wavefronts in AMD terminology), each of which contains 32 or 64 threads (vendor-dependent) that all have their own set of registers. The warp holds the actual instruction stream and can issue operations that occur on all or some of those threads. Branching is possible, but pretty limited unless it's the same for every invocation. So the programming styles/models are incompatible from the start.
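
A rough sketch of what that means for the programming model, in CUDA terms just because it's the shortest way to show it outside a graphics API (these are illustrative kernels, not tied to anything above):

    // SIMT model: 32 (or 64) threads per warp share one instruction stream.
    // A branch that differs within a warp forces both paths to execute,
    // with the inactive threads masked off.
    __global__ void divergent(float* out, const float* in, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        if (in[i] > 0.0f) {
            // Threads whose data is <= 0 sit idle while these run...
            out[i] = sqrtf(in[i]);
        } else {
            // ...and vice versa: the warp pays for both branches.
            out[i] = 0.0f;
        }
    }

    __global__ void uniform_branch(float* out, const float* in, int n, bool mode) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        // Same condition for every thread in the warp, so only one path
        // ever executes - this is the "cheap" kind of branch.
        if (mode) out[i] = sqrtf(fabsf(in[i]));
        else      out[i] = 0.0f;
    }

Shader invocations in the graphics pipeline run under the same execution model, which is part of why divergence-heavy or pointer-chasing code maps so poorly onto it.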

Then the real problem is, since all shader invocations share those fixed-function units, if you need to reconfigure them to use a different set of textures, buffers, shaders, etc., you have to bring the whole operation to a complete halt, reconfigure it and restart it. And, contrary to popular belief, GPUs are the exact opposite of fast - each shader invocation takes an enormous amount of time to run, which is traded for throughput. Stopping that thing means having to wait for the last pieces of work to trickle through (and then when starting back up, you have to wait for enough work to be pushed through that all the hardware can be used efficiently), which means a lot of time doing little work.
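
If you want to see the latency-vs-throughput trade for yourself, a crude CUDA microbenchmark along these lines makes the point. Exact numbers vary wildly by driver and hardware, so treat it as a sketch: the claim is only that launch and pipeline latency dominate the tiny case.

    // Time the same trivial kernel on 1 element vs. a few million. On most
    // GPUs the wall-clock time is of the same order either way, because the
    // fixed per-launch latency dwarfs the per-element cost.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void scale(float* data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= 2.0f;
    }

    static float time_launch(float* d, int n) {
        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        cudaEventRecord(start);
        scale<<<(n + 255) / 256, 256>>>(d, n);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        return ms;
    }

    int main() {
        const int big = 1 << 22;  // ~4M floats
        float* d = nullptr;
        cudaMalloc(&d, big * sizeof(float));
        time_launch(d, big);  // warm-up launch
        printf("1 element:   %.3f ms\n", time_launch(d, 1));
        printf("4M elements: %.3f ms\n", time_launch(d, big));
        cudaFree(d);
        return 0;
    }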

So if you're trying to deal with the above, any notion of keeping things separate and clean (in terms of what the hardware sees, anyway) immediately goes out the window. That's why things like virtual texturing exist - to let you more or less pack every single texture you need into a single gargantuan texture and draw as much as possible using some God-shader (and also because heavy reliance on textures tends to work well on consoles). Then you also have to manage to make good use of those fixed-function units (which is where tiled rasterizers on mobile GPUs can become a problem), but that's a relatively separate thing.
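
For what it's worth, the core of virtual texturing is just an indirection step like the following. This is a heavily simplified CUDA-style sketch with made-up structs and parameters; real implementations add feedback passes, mip levels and filtering borders around each page.

    // A "page table" maps virtual pages to slots in one big physical atlas,
    // so a single shader can sample what is logically thousands of textures.
    struct PageEntry { int slot_x, slot_y; };  // where the page lives in the atlas

    __device__ float2 virtual_to_atlas_uv(float2 virtual_uv,
                                          const PageEntry* page_table,
                                          int pages_per_side,       // virtual pages per axis
                                          int atlas_slots_per_side) // atlas slots per axis
    {
        // Which virtual page does this UV fall into?
        int px = min((int)(virtual_uv.x * pages_per_side), pages_per_side - 1);
        int py = min((int)(virtual_uv.y * pages_per_side), pages_per_side - 1);
        PageEntry e = page_table[py * pages_per_side + px];

        // Position within that page, then remap into the physical atlas.
        float fx = virtual_uv.x * pages_per_side - px;
        float fy = virtual_uv.y * pages_per_side - py;
        return make_float2((e.slot_x + fx) / atlas_slots_per_side,
                           (e.slot_y + fy) / atlas_slots_per_side);
    }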

Also: transferring data back and forth in itself isn't necessarily that bad in my experience (just finicky), it's usually the delays and synchronization that get you.
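
For example (a CUDA sketch, since that's the easiest way to show it standalone): the copy itself can run asynchronously, and the cost only really lands at the point where the CPU decides to wait.

    // With pinned host memory and a stream, the device-to-host copy can
    // overlap other work; the stall happens only at the explicit sync point.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void produce(float* out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = i * 0.5f;
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);

        float *d_data = nullptr, *h_data = nullptr;
        cudaMalloc(&d_data, bytes);
        cudaMallocHost(&h_data, bytes);       // pinned memory enables async copies

        cudaStream_t stream;
        cudaStreamCreate(&stream);

        produce<<<(n + 255) / 256, 256, 0, stream>>>(d_data, n);
        cudaMemcpyAsync(h_data, d_data, bytes, cudaMemcpyDeviceToHost, stream);

        // The CPU is still free here; do other work instead of stalling...

        cudaStreamSynchronize(stream);        // ...and pay the latency only once, here.
        printf("first value back on CPU: %f\n", h_data[0]);

        cudaFreeHost(h_data);
        cudaFree(d_data);
        cudaStreamDestroy(stream);
        return 0;
    }

The graphics APIs have their own equivalents (fences, staging/readback buffers), but inside a browser each of those hops also crosses the GPU-process boundary mentioned upthread, which is where the stalls pile up.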


I agree, yet I still don't understand how WebGPU is going to get buy-in from Apple and Microsoft if previous attempts at defining cross-platform APIs could not. Any web API will be open and cross-platform. If WebGPU is well-designed, it could very well be adopted as the next OpenGL.

I would rather target WebGPU and write my program once than implement my logic three times in Metal, DirectX and Vulkan. But Apple and Microsoft don't want me doing that, so why would they support WebGPU?


These discussions always forget that Sony and Nintendo are also interested parties where 3D APIs are concerned.



