From the grandparent article:

> Well, for 40 bouncing circles, on a 700x500 grid, that would be on the order of 14 million operations. If we want to have a nice smooth 60fps animation, that would be 840 million operations per second. JavaScript engines may be fast nowadays, but not that fast.

The math is super-cool, and efficiency is important for finding isosurfaces in higher dimensions, but those aren't really scary numbers for normal programs. Just tinting the screen at 2880x1800 is ~5 million operations per frame. GPUs can handle it.
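The back-of-envelope arithmetic, spelled out (the 2880x1800 resolution is just my own HiDPI screen):

```javascript
// Naive per-pixel metaball evaluation: one kernel evaluation
// per ball per pixel per frame.
const balls = 40, width = 700, height = 500, fps = 60;
const opsPerFrame = balls * width * height;   // 14,000,000
const opsPerSecond = opsPerFrame * fps;       // 840,000,000

// For comparison: a single full-screen tint at HiDPI resolution
// already touches about 5.2 million pixels per frame.
const tintOps = 2880 * 1800;                  // 5,184,000
```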

A simple way to render is to draw a quad for the metaball, using the metaball kernel function in the fragment shader. Use additive blending while rendering to a texture for the first pass, then render the texture to screen with thresholding for the second pass. The end result is per-pixel sampling of the isosurface.
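To make the two passes concrete, here's the same math as a CPU sketch in plain JS (the Gaussian kernel and the threshold of 1.0 are my own choices; on the GPU the first loop is the additive-blend pass into a texture, and the second is the thresholding pass to screen):

```javascript
// Pass 1 (additive blend): accumulate each ball's kernel into a field buffer.
// Pass 2 (threshold): classify each pixel as inside/outside the isosurface.
function renderMetaballs(balls, width, height, threshold = 1.0) {
  const field = new Float32Array(width * height);
  for (const { x, y, radius } of balls) {
    for (let py = 0; py < height; py++) {
      for (let px = 0; px < width; px++) {
        const dx = px - x, dy = py - y;
        // Gaussian kernel: falls off smoothly with squared distance.
        field[py * width + px] += Math.exp(-(dx * dx + dy * dy) / (radius * radius));
      }
    }
  }
  // Second pass: per-pixel thresholding of the accumulated field.
  return field.map(v => (v >= threshold ? 1 : 0));
}
```

On the GPU the outer loop over balls becomes one quad per ball in the fragment shader, so the CPU never touches the pixels at all.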

Admittedly, it's kind of a brute-force solution, but even the integrated GPU on my laptop can render thousands of metaballs like that at HiDPI resolutions.

(Specifically, I use a Gaussian kernel for my metaballs. It requires exp, which is more expensive computationally than a few multiplies. I render 1500 of them at 2880x1671 at 5ms per frame on an Intel Iris Pro [Haswell].)

Though, the work scales with fragment count, so a few large metaballs may be as costly as many smaller ones. For large numbers of metaballs, you probably also want to use instancing, so you'd need OpenGL ES 3.0 / WebGL 2.0, which are fairly recent.

But 40 metaballs with a simple kernel at 700x500? That's easy for a GPU.



I believe 2D canvas rendering is performed on the CPU rather than the GPU.


You wouldn't use a 2D context; you'd use WebGL shaders instead. Besides that, most operations on a 2D context are performed on the GPU anyway.


Most is done on the GPU nowadays - only getImageData() and putImageData() have to go via the CPU.

You can BTW also "cheat" your way to meta-balls: https://stackoverflow.com/questions/17177748/increasing-real...


Much of it is offloaded to the GPU by recent browsers.


The important bit is getting the metaball function into the fragment shader. I'm not really a web guy, but I know you can do that with WebGL.

For a canvas with a more limited API, you can still do it if image drawing is GPU-accelerated with a composite mode like "lighter". If that's the case, you can do basically the same thing by first rendering the metaball function to an image once, and then drawing that image for each metaball. Going through an image introduces extra aliasing artifacts, but it might get around the API limitations.
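A minimal sketch of that idea (the function name and sizes are mine; in a real page the precomputed values would back an offscreen canvas):

```javascript
// Pre-render the metaball kernel once into a sprite. Drawing that sprite
// per ball with 'lighter' (additive) compositing sums the fields per pixel,
// just like additive blending on the GPU would.
function makeKernelSprite(size, radius) {
  const data = new Float32Array(size * size); // would back an ImageData
  const c = size / 2;
  for (let y = 0; y < size; y++) {
    for (let x = 0; x < size; x++) {
      const dx = x - c, dy = y - c;
      data[y * size + x] = Math.exp(-(dx * dx + dy * dy) / (radius * radius));
    }
  }
  return data;
}

// In the browser you'd copy this into an offscreen canvas, then:
//   ctx.globalCompositeOperation = 'lighter'; // additive blend
//   for (const b of balls) ctx.drawImage(sprite, b.x - size / 2, b.y - size / 2);
```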

Edit: I suppose you would still want to find a GPU-accelerated threshold function for the step after that.


I remember what I did to optimize my own metaballs: I first rendered each ball size that was used into its own offscreen map, centered at (0, 0). Then, per pixel, I would just add up the values looked up at each ball's relative position (signs removed) in its precomputed map.

It does take some memory, but n lookups per pixel per frame for n balls, plus the overhead of mapping the field into an actual color, was still pretty fast. Instead of saying "the GPU can do this", I'd rather ask: hey, can we do even better than that?
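If I understand the trick right, it looks roughly like this (function names and the Gaussian kernel are my own assumptions; the kernel's symmetry is what lets you drop the signs and store only one quadrant per radius):

```javascript
// Precompute one quadrant of the kernel per ball radius; by symmetry,
// indexing with |dx|, |dy| covers all four quadrants.
function precomputeKernel(radius, extent) {
  const table = new Float32Array(extent * extent);
  for (let dy = 0; dy < extent; dy++) {
    for (let dx = 0; dx < extent; dx++) {
      table[dy * extent + dx] = Math.exp(-(dx * dx + dy * dy) / (radius * radius));
    }
  }
  return table;
}

// Per pixel: n table lookups for n balls, no exp() at runtime.
function fieldAt(px, py, balls, tables, extent) {
  let sum = 0;
  for (const b of balls) {
    const dx = Math.abs(px - b.x), dy = Math.abs(py - b.y); // signs removed
    if (dx < extent && dy < extent) sum += tables[b.radius][dy * extent + dx];
  }
  return sum;
}
```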


I used to do that, but sampling from a texture seems to be about as expensive as directly calculating the metaball function per fragment.

It is cool to see what we can do. That is one of the things I really like about winkerVSbecks' approach. It's interesting and different. Better for some uses, too, which is always nice to see.


