I wrote an implementation based on this a few weeks back. It works really nicely and I can render some 2M line segments at 60fps.
One thing you should do is try to limit the number of very short and small segments, as GPUs don’t like rendering 1px triangles.
Still, I've been led to understand that the instanced rendering done in this article is suboptimal on most GPUs. A better way to do it is apparently to put the points into a (non-vertex) buffer and do (batched) draw calls for large numbers of vertices with a kind of manual instancing: basically looking up what amounts to instance data using gl_VertexIndex/6 and vertex data using gl_VertexIndex%6 (or 4 instead of 6 for indexed rendering).
Unfortunately I haven’t had the time to implement and test this yet.
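Something along these lines, as an untested WebGL 2 sketch (gl_VertexID is WebGL 2's counterpart to gl_VertexIndex, and the texture layout and uniform names here are just placeholders):

    #version 300 es
    // Vertex shader: segment endpoints live in a float texture instead of a
    // vertex buffer, and gl_VertexID alone decides which segment and which
    // corner of its quad this invocation is.
    uniform highp sampler2D u_points;   // RG float texture, one 2D point per texel
    uniform vec2 u_resolution;          // canvas size in pixels
    uniform float u_width;              // line width in pixels

    vec2 getPoint(int i) {
        return texelFetch(u_points, ivec2(i, 0), 0).xy;
    }

    void main() {
        int segment = gl_VertexID / 6;      // "instance" index
        int corner  = gl_VertexID % 6;      // which of the 6 quad vertices

        vec2 a = getPoint(segment);
        vec2 b = getPoint(segment + 1);
        vec2 dir    = normalize(b - a);
        vec2 normal = vec2(-dir.y, dir.x) * (u_width * 0.5);

        // Two triangles per segment: corners 0,1,2 and 3,4,5
        vec2 pos;
        if      (corner == 0) pos = a + normal;
        else if (corner == 1) pos = a - normal;
        else if (corner == 2) pos = b + normal;
        else if (corner == 3) pos = b + normal;
        else if (corner == 4) pos = a - normal;
        else                  pos = b - normal;

        gl_Position = vec4(pos / u_resolution * 2.0 - 1.0, 0.0, 1.0);
    }

The whole batch is then a single gl.drawArrays(gl.TRIANGLES, 0, 6 * segmentCount), with no instancing extension involved.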
If like me you're also rendering lines with perspective, you'll want to look at Freya Holmer's work too. In particular, fading out lines as they get thin is important to reduce shimmer/aliasing: basically, keep the width at a minimum of 1px and adjust the opacity down below that instead.

Edit: For those interested, mine is implemented as a plugin to the Bevy game engine, written in Rust. Bevy can be found here https://github.com/bevyengine/bevy. The plugin is https://github.com/ForesightMiningSoftwareCorporation/bevy_p....
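A minimal vertex-shader sketch of that width-clamping trick (not from Freya Holmer's material; the uniform/varying names and the projection maths are my own shorthand):

    #version 300 es
    // Clamp the on-screen width to 1px and fade alpha instead of getting thinner.
    uniform mat4  u_proj;           // perspective projection
    uniform mat4  u_view;           // camera transform
    uniform float u_worldWidth;     // desired line width in world units
    uniform vec2  u_viewport;       // viewport size in pixels
    in vec3  a_position;
    out float v_fade;               // multiply into the line's alpha in the fragment shader

    void main() {
        vec4 clipPos = u_proj * u_view * vec4(a_position, 1.0);
        // Approximate projected width in pixels (u_proj[1][1] is the usual
        // cot(fov/2) term of a perspective projection)
        float pixelWidth = u_worldWidth * u_proj[1][1] * u_viewport.y / (2.0 * clipPos.w);
        float drawnWidth = max(pixelWidth, 1.0);   // never extrude thinner than 1px
        v_fade = min(pixelWidth, 1.0);             // below 1px, fade instead
        // ...extrude the quad by drawnWidth here, as in the article...
        gl_Position = clipPos;
    }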
> Still, I've been led to understand that the instanced rendering done in this article is suboptimal on most GPUs. A better way to do it is apparently to put the points into a (non-vertex) buffer and do (batched) draw calls for large numbers of vertices with a kind of manual instancing: basically looking up what amounts to instance data using gl_VertexIndex/6 and vertex data using gl_VertexIndex%6 (or 4 instead of 6 for indexed rendering).
Note that you can't do this in WebGL, at least WebGL 1.0. (Also, be careful about assuming things apply to “most” GPUs!)
I don't mean to be snarky, but WebGL 1.0 is 10 years old as of March. So yeah, if you need to support that there's a lot of extra work you need to do, for all kinds of things.
One advantage of rendering your own lines is that it allows you to create cool effects.
For instance, if you change the opacity and width of lines depending on how far away they are from a given plane, you can simulate a depth of field effect that runs even on low-end devices.
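A sketch of what the opacity half of that could look like in the fragment shader (u_focusDepth, u_focusRange and the varying are made-up names, not anything from the article):

    #version 300 es
    precision mediump float;
    // Fade lines out the further they are from a chosen focal plane.
    uniform vec4  u_color;
    uniform float u_focusDepth;   // view-space depth of the focal plane
    uniform float u_focusRange;   // distance over which lines fade away
    in float v_viewDepth;         // view-space depth, interpolated from the vertex shader
    out vec4 outColor;

    void main() {
        // 1.0 at the focal plane, falling to 0.0 at u_focusRange away from it
        float focus = 1.0 - smoothstep(0.0, u_focusRange, abs(v_viewDepth - u_focusDepth));
        outColor = vec4(u_color.rgb, u_color.a * focus);
    }

The width half of the effect is the same idea in the vertex shader: scale the quad's extrusion by the same focus factor.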
Rendering "just some" lines is easy. Rendering nice lines is hard. OpenGL is a pretty low level abstraction, don't expect to get the same kind of stuff for free as you do in Canvas or the DOM.
I don't get why the higher-level frameworks can't make it easier though. I've used three.js before (without knowing any WebGL) and was burned by the fact that lines can't even display consistently across devices. For example, the line widths would be ignored in some cases but not others: https://threejs.org/docs/?q=line#api/en/materials/LineBasicM...
Because this is largely outside the scope of most 3D game engines. Rendering crisp 2D lines (or any anti-aliased complex vector polygon) on the GPU is hard: it involves a lot of high-level decisions about how to process, structure & submit the data to the graphics pipeline. Depending on the application, one approach may be vastly superior to another.
I do think ThreeJS should include basic thick screen space line support, though, as it covers most use cases and doesn’t add a lot of bloat/complexity to the core framework.
True. It's part of their examples, but would be worth having in core I feel, as most people are using THREE.Line for their apps/designs, which is unreliable.
I didn't actually use three.js, just studied the implementation a bit. It creates triangle geometry, so I would expect it to work across all supported devices. The 10px is just the limit of the demo slider.
I think it's because nice lines aren't a super big use case for WebGL. Most people use it for 3d scenes, 3d objects, etc.
In other words, I suppose that three.js doesn't have nice lines because nobody contributed it, which is usually a sign of low-ish demand. (I know nothing about three.js governance and goals so I might be mistaken)
When we wrote the instanced WebGL line renderer for https://count.co, one of the tricky parts was switching between mitre and bevel joins based on the join angle - for very acute angles the mitre join shoots off to infinity.
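For reference, the usual form of that test looks something like the snippet below (a sketch of the standard mitre-limit trick, not necessarily how count.co does it; a full 180° reversal still needs its own special case):

    // Offset for the outer vertex at a join between segments prev->point->next.
    const float MITER_LIMIT = 4.0;    // same default as SVG's stroke-miterlimit

    vec2 joinOffset(vec2 prev, vec2 point, vec2 next, float halfWidth) {
        vec2 n1 = normalize(vec2(-(point - prev).y, (point - prev).x));  // segment normals
        vec2 n2 = normalize(vec2(-(next - point).y, (next - point).x));
        vec2 miter = normalize(n1 + n2);
        float cosHalfAngle = dot(miter, n1);             // tends to 0 as the join sharpens
        float miterLength  = halfWidth / max(cosHalfAngle, 1e-4);
        if (miterLength > halfWidth * MITER_LIMIT) {
            return n1 * halfWidth;    // too sharp: bevel (real code emits both bevel corners)
        }
        return miter * miterLength;   // mitre join
    }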
Another nice extension (that we are yet to implement) is anti-aliasing, but I think that requires extra geometry to vary the opacity over.
> Another nice extension (that we are yet to implement) is anti-aliasing, but I think that requires extra geometry to vary the opacity over.
There’s a way to do it where you pass one extra vec2 from the vertex shader and use that to determine how much of the pixel is covered by the lines. (This has the effect of thinning the line, so you may want to thicken it to compensate)
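One way to read that, as a sketch (v_widths is a made-up varying: x is the fragment's signed distance from the line's centre in pixels, y is the half-width in pixels):

    #version 300 es
    precision mediump float;
    uniform vec4 u_color;
    in vec2 v_widths;    // x: signed distance from the centre line (px), y: half-width (px)
    out vec4 outColor;

    void main() {
        // Approximate pixel coverage: opaque in the middle, a ~1px ramp at each edge
        float coverage = clamp(v_widths.y - abs(v_widths.x) + 0.5, 0.0, 1.0);
        outColor = vec4(u_color.rgb, u_color.a * coverage);
    }

As noted, this eats into the line a little, so the quad should be extruded roughly a pixel wider than the nominal width to leave room for the ramp.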
I am also rendering round caps in the fragment shader. You can then render multiple lines with 1 draw call. The only problem is drawing transparent lines, because of the overlap between segments.
I’m also interested in adding anti-aliasing to my implementation. Maybe widening the line by a bit and fading opacity over that? But yeah, extra geometry, it seems.
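A sketch of the round-caps-in-the-fragment-shader approach mentioned above (the varyings are my own naming, not the parent commenter's code: v_pos is the fragment's position in "line space", x along the segment and y across it, and v_len is the segment length, all in pixels):

    #version 300 es
    precision mediump float;
    uniform vec4  u_color;
    uniform float u_halfWidth;
    in vec2  v_pos;     // x along the segment, y across it (pixels)
    in float v_len;     // segment length (pixels)
    out vec4 outColor;

    void main() {
        // Distance to the segment itself (a capsule): clamp onto the axis, then measure
        vec2 nearest = vec2(clamp(v_pos.x, 0.0, v_len), 0.0);
        float d = distance(v_pos, nearest);
        if (d > u_halfWidth) discard;    // outside the rounded cap / line body
        outColor = u_color;
    }

For this to work the quad has to be extruded by the half-width past each endpoint, so there are fragments there to keep or discard.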
Heads up for anyone reading this on Chrome: the examples may fail to render, throwing a `Cannot read property '_gl' of null` exception; Firefox worked fine when I tried it.
Hmmmmm, that's kind of interesting. You're seeing the sort of crash you get when the browser's decided your system can't handle WebGL, and turned it off. Normally that would translate to "what GPU are you using, it's probably too old" -- except Firefox is apparently fine.
I'll bet chrome://gpu either shows "WebGL: disabled" or "WebGL: enabled; WebGL2: disabled". I think `ignore-gpu-blocklist` in chrome://flags should affect WebGL.
FWIW I'm running Chrome on Linux with hardware rendering forced/explicitly enabled for both video and rasterization (`enable-gpu-rasterization` in chrome://flags - don't think this affects WebGL), and it all works great (notwithstanding terrible thermal design) on this fairly old HP laptop w/ i5 + HD Graphics 4000. (The GPU process does admittedly occasionally hang and need killing so it restarts, but that's about it.)
Getting hardware video decode requires --enable-features=VaapiVideoDecoder on the command line as well, or at least it did in the last version of Chrome I checked; I haven't verified whether it's still required.
If poking `ignore-gpu-blocklist` doesn't work, what does chrome://gpu show in Chrome, what does about:support (and possibly about:gpu? not sure) show in Firefox, and what GPU and OS are you using?
Nit: the article mentions things like GL_LINES and GL_LINE_STRIP. Those are OpenGL names, not WebGL ones (in WebGL they're gl.LINES and gl.LINE_STRIP). It might seem unimportant to someone who knows both APIs, but to someone who only knows or is learning WebGL it can be extremely confusing.