I remember what I did to optimize my own metaballs: I pre-rendered each ball size that was in use once, into its own off-screen map centered at (0, 0). Then per frame, for each pixel, I would just sum the values from each ball's preset map at the pixel's relative offset (signs stripped, since the falloff is symmetric).
It does take some memory, but n lookups-and-adds per pixel per frame for n balls, plus the overhead of turning the summed field into an actual color, was still pretty great. Instead of saying "the GPU can do this", I'd rather ask: hey, can we do even better than that?
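If I'm reading that right, here is a minimal sketch of the idea in Python. The falloff form (r² / d²), the radii, the screen size, and the threshold are all made-up illustration values, not anything from the original comment:

```python
# Sketch of the precomputed-map approach: render each ball size's falloff once,
# centered at (0, 0), then per frame only do table lookups and adds per pixel.

WIDTH, HEIGHT = 320, 240

def falloff(dx, dy, radius):
    # Assumed classic metaball falloff: r^2 / d^2, clamped at the center.
    d2 = dx * dx + dy * dy
    return radius * radius / d2 if d2 > 0 else 1e6

def build_map(radius, size):
    # One quadrant is enough because the falloff is symmetric in x and y.
    return [[falloff(x, y, radius) for x in range(size)] for y in range(size)]

# One preset map per distinct ball radius (radii chosen arbitrarily here).
maps = {r: build_map(r, max(WIDTH, HEIGHT)) for r in (10, 20, 30)}

def field_at(px, py, balls):
    # n lookups + adds per pixel for n balls; no per-ball divide at frame time.
    total = 0.0
    for (bx, by, r) in balls:
        dx, dy = abs(px - bx), abs(py - by)   # signs removed, as described
        total += maps[r][dy][dx]
    return total

balls = [(100, 80, 20), (150, 120, 30), (60, 200, 10)]
inside = field_at(100, 100, balls) >= 1.0     # threshold test per pixel
```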
I used to do that, but sampling from a texture seems to be about as expensive as directly calculating the metaball function per fragment.
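For comparison, the direct per-fragment evaluation being referred to is essentially the following, written as a Python sketch rather than actual shader code (same assumed r² / d² falloff as above):

```python
def field_direct(px, py, balls):
    # Direct metaball evaluation per pixel: one squared distance and one
    # divide per ball, which is what a texture fetch would be replacing.
    total = 0.0
    for (bx, by, r) in balls:
        dx, dy = px - bx, py - by
        d2 = dx * dx + dy * dy
        total += (r * r) / d2 if d2 > 0 else 1e6
    return total
```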
It is cool to see what we can do. That is one of the things I really like about winkerVSbecks' approach. It's interesting and different. Better for some uses, too, which is always nice to see.