> The WebGPU and WebGL APIs are pretty different, so I'm not sure you can call it “technically the same code”.
Isn't Bevy using WGPU under the hood, and then they just compile it for both WebGL and WebGPU? That should be the same code Bevy-wise, and any overhead or difference should be caused by either the WGPU "compiler" or the browser's WebGPU implementation.
Yes, but also no. WebGL lacks compute shaders and storage buffers, so Bevy's renderer takes a different path on WebGL than on WebGPU. A lot of the code is shared, but a lot is also unique per platform.
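To make that concrete: at the wgpu level, a WebGL2 target reports "downlevel" capabilities without compute shaders or vertex-stage storage buffers, and a renderer has to branch on that. A rough sketch of the kind of check involved (illustrative only, not Bevy's actual code; exact wgpu names may vary by version):

```rust
// Illustrative: pick a GPU-driven or fallback path based on what the
// adapter actually supports. Flag names follow recent wgpu versions.
fn pick_render_path(adapter: &wgpu::Adapter) -> &'static str {
    let downlevel = adapter.get_downlevel_capabilities();

    let has_compute = downlevel
        .flags
        .contains(wgpu::DownlevelFlags::COMPUTE_SHADERS);
    let has_vertex_storage = downlevel
        .flags
        .contains(wgpu::DownlevelFlags::VERTEX_STORAGE);

    if has_compute && has_vertex_storage {
        // WebGPU / native: culling, mesh preprocessing, etc. can run in compute.
        "gpu_driven"
    } else {
        // WebGL2: no compute shaders or storage buffers, so the same work has
        // to happen on the CPU or be packed into uniform/vertex data instead.
        "cpu_fallback"
    }
}
```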
---
This is also as good a place as any, so I'll just add that doing 1:1 graphics comparisons is really, _really_ hard. OS, GPU driver, API, rendering structure, GPU platform, etc. all lead to vastly different performance outcomes.
One example is that something might run at e.g. 100 FPS with a few objects, but 10 FPS with more than a thousand objects. A different renderer might run at 70 FPS with a few objects, but also 60 FPS with a few thousand objects.
Or, it might run well on RDNA2/Turing+ GPUs, but terribly on GCN/Pascal or older GPUs.
Or, maybe wgpu has a bug with the swapchain presentation setup or barrier recording on Vulkan, and you'll get very different results from the DirectX 12 backend on AMD GPUs until it's fixed, while Nvidia is fine because its drivers are more permissive about bugs.
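When you suspect something like that, one practical sanity check is to pin wgpu to a specific backend and A/B the results on the same machine. A minimal sketch (exact `InstanceDescriptor` fields and function signatures vary between wgpu versions; in Bevy you'd normally configure this through the render plugin's `WgpuSettings` instead):

```rust
// Illustrative: request a specific backend so you can compare Vulkan vs
// DX12 on the same hardware. Exact API details vary by wgpu version.
let instance = wgpu::Instance::new(wgpu::InstanceDescriptor {
    backends: wgpu::Backends::VULKAN, // or wgpu::Backends::DX12
    ..Default::default()
});

let adapter = pollster::block_on(instance.request_adapter(&wgpu::RequestAdapterOptions {
    power_preference: wgpu::PowerPreference::HighPerformance,
    ..Default::default()
}))
.expect("no adapter for the requested backend");

// Confirms which backend/driver you actually ended up on.
println!("{:?}", adapter.get_info());
```

If I remember correctly, wgpu (and Bevy's default settings) also honor a `WGPU_BACKEND` environment variable, which lets you do the same A/B test without recompiling.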
I don't trust most verbal comparisons between renderers. The only real way is to see if an engine is able to meet your FPS and quality requirements on X platforms out of the box or with Y amount of effort, and if not, run it through a profiler and see where the bottleneck is.
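For the "does it meet your FPS budget out of the box" part, Bevy at least makes the first measurement cheap: its diagnostics plugins log frame times to the console, which tells you whether you even need to reach for a GPU profiler. Something like this (plugin names are from `bevy::diagnostic` and may differ slightly between Bevy versions):

```rust
use bevy::diagnostic::{FrameTimeDiagnosticsPlugin, LogDiagnosticsPlugin};
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        // Collects frame time / FPS and logs it periodically, so you can
        // check your target scene against your budget on each platform.
        .add_plugins((
            FrameTimeDiagnosticsPlugin::default(),
            LogDiagnosticsPlugin::default(),
        ))
        .run();
}
```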