A couple of years back I also wrote a pure WebGL renderer for Neverball levels [1]. No physics, no translation layer, just the 3D renderer part in direct WebGL. It also has a surprisingly low performance ceiling; I'd say I managed even worse than what the gl4es translation layer does for Neverball. By far the biggest performance boosts came from instanced arrays and vertex array objects - basically just reducing the number of calls into WebGL. Seems to me that WebGL has a lot of overhead with or without a translation layer.

[1] https://github.com/parasti/parabola
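
For context, the instancing + VAO setup looks roughly like this in WebGL2. This is a minimal sketch, not code from the renderer above; the attribute locations, the per-instance offset attribute, and the instance count are assumptions made for illustration.

    // Minimal WebGL2 sketch: one VAO with shared geometry plus a per-instance
    // offset attribute, so many copies render with a single instanced draw call.
    // Assumes `gl` is a WebGL2RenderingContext and `program` is already linked
    // with attribute 0 = a_position (vec2) and attribute 1 = a_offset (vec2).
    const vao = gl.createVertexArray();
    gl.bindVertexArray(vao);

    // Shared geometry: one small triangle.
    const positions = new Float32Array([0, 0.1, -0.1, -0.1, 0.1, -0.1]);
    const posBuf = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, posBuf);
    gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);
    gl.enableVertexAttribArray(0);
    gl.vertexAttribPointer(0, 2, gl.FLOAT, false, 0, 0);

    // Per-instance data: one 2D offset per instance.
    const numInstances = 1000;
    const offsets = new Float32Array(numInstances * 2);
    for (let i = 0; i < numInstances; i++) {
      offsets[i * 2 + 0] = Math.random() * 2 - 1;
      offsets[i * 2 + 1] = Math.random() * 2 - 1;
    }
    const offBuf = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, offBuf);
    gl.bufferData(gl.ARRAY_BUFFER, offsets, gl.STATIC_DRAW);
    gl.enableVertexAttribArray(1);
    gl.vertexAttribPointer(1, 2, gl.FLOAT, false, 0, 0);
    gl.vertexAttribDivisor(1, 1); // advance a_offset per instance, not per vertex

    gl.bindVertexArray(null);

    // Per frame: one bind + one draw instead of numInstances separate draws.
    gl.useProgram(program);
    gl.bindVertexArray(vao);
    gl.drawArraysInstanced(gl.TRIANGLES, 0, 3, numInstances);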

I did some GL benchmarking for simple draw calls (on desktop) a couple of years ago, and while it's true that WebGL came out lowest, the differences between native drivers were much bigger. I basically checked how many trivial draw calls (a 16-byte uniform update plus a draw call) could be issued per frame on Windows before the frame rate dropped below 60 fps. For WebGL in Chrome this was around 5k, native Intel was around 13k, and NVIDIA topped out at around 150k (not all that surprising, since NVIDIA has traditionally had the best GL drivers).
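
The measurement loop is essentially of this shape. A rough sketch only, not the original benchmark: `gl`, the linked `program`, the vec4 uniform name `u_params`, the ramp step, and the pre-bound geometry are all assumptions. In practice you'd also average over several frames rather than react to a single slow one.

    // Sketch of a draw-call throughput test: each draw is preceded by a 16-byte
    // uniform update (one vec4), and the per-frame draw count ramps up until the
    // frame interval no longer fits in ~16.7 ms, i.e. the page drops below 60 fps.
    // Assumes a WebGL context `gl`, a linked `program` with a vec4 uniform
    // "u_params", and a bound VAO/buffer holding a trivial triangle.
    const loc = gl.getUniformLocation(program, "u_params");
    let drawCount = 1000;
    let last = 0;

    function frame(now) {
      if (last !== 0) {
        const dt = now - last;               // time since the previous frame
        if (dt < 16.8) {
          drawCount += 500;                  // still hitting 60 fps: push harder
        } else {
          console.log(`~${drawCount} trivial draws/frame before dropping below 60 fps`);
        }
      }
      last = now;

      gl.clear(gl.COLOR_BUFFER_BIT);
      gl.useProgram(program);
      for (let i = 0; i < drawCount; i++) {
        gl.uniform4f(loc, i, 0, 0, 0);       // 16-byte uniform update
        gl.drawArrays(gl.TRIANGLES, 0, 3);   // trivial draw call
      }

      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);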

It is definitely true, though, that you need to employ old-school batching tricks to get any sort of performance out of WebGL. But that's also not surprising, because WebGL is GL in name only; internally it works entirely differently (for instance, on Windows, WebGL implementations are backed by D3D).
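
To illustrate the kind of batching meant here, a minimal sketch: many small static meshes that share a material and vertex layout get concatenated into one buffer, so the whole set becomes a single draw call. The `meshes` input and `floatsPerVertex` parameter are hypothetical, not from any particular engine.

    // Concatenate small static meshes (pre-transformed into world space) into one
    // vertex buffer, then draw the whole batch with a single call per frame.
    function buildBatch(gl, meshes, floatsPerVertex) {
      const totalFloats = meshes.reduce((sum, m) => sum + m.length, 0);
      const batched = new Float32Array(totalFloats);

      let offset = 0;
      for (const mesh of meshes) {
        batched.set(mesh, offset);
        offset += mesh.length;
      }

      const buffer = gl.createBuffer();
      gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
      gl.bufferData(gl.ARRAY_BUFFER, batched, gl.STATIC_DRAW);

      return { buffer, vertexCount: totalFloats / floatsPerVertex };
    }

    // Per frame: one draw for the batch instead of meshes.length draws.
    // (Attribute pointers are assumed to be set up to match the interleaved layout.)
    // gl.bindBuffer(gl.ARRAY_BUFFER, batch.buffer);
    // gl.drawArrays(gl.TRIANGLES, 0, batch.vertexCount);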
