
Even if this doesn't replace triangles everywhere, I'm guessing it's still going to be the easiest way to generate a large volume of static art assets, which means we will see hybrid rendering pipelines.


AIUI these algorithms currently bake all of the lighting into the surface colors statically. That mostly works when the entire scene is constructed as one giant blob where nothing moves, but if you wanted to render an individual NeRF asset inside an otherwise standard triangle-based pipeline, it would need to be more adaptable than that. Even if the asset itself isn't animated, it would at a bare minimum need to adapt to the local lighting, which I haven't seen anyone tackle yet; the focus has been on the render-one-giant-static-blob problem.

For hybrid pipelines to work, the splatting algorithm would probably need to output the standard G-buffer channels (unlit surface color, normal, roughness, etc.), which can then go through the same lighting pass as the triangle-based assets, rather than the splatting algorithm trying to infer lighting by itself and inevitably producing a result that's inconsistent with how the triangle-based assets are lit.

Think of those old cartoons where you could always tell when part of the scenery was going to move because the animation cel would stick out like a sore thumb against the painted background; that's the kind of illusion break you would get if the lighting isn't consistent.
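
To make that concrete, here's a minimal sketch in C++ (all names are hypothetical, not any engine's actual API): the splat rasterizer writes the same per-texel channels a triangle rasterizer would, and one shared lighting pass shades both, so neither can drift out of sync with the other:

    // Hypothetical sketch: splat and triangle rasterizers both write these
    // channels; a single deferred lighting pass shades every texel the same
    // way, keeping the two kinds of assets visually consistent.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };
    static Vec3  mul(Vec3 a, float s) { return {a.x*s, a.y*s, a.z*s}; }
    static Vec3  add(Vec3 a, Vec3 b)  { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
    static float dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3  normalize(Vec3 v)    { return mul(v, 1.0f / std::sqrt(dot(v, v))); }

    // One G-buffer texel. Note the splat has to store *unlit* albedo rather
    // than baked-in radiance for this to work.
    struct GBufferTexel {
        Vec3  albedo;     // unlit surface color
        Vec3  normal;     // world-space normal (e.g. from a normal field over the splats)
        float roughness;  // shared material parameter
    };

    // Shared lighting pass (simple Blinn-Phong, white light, for brevity):
    // it never needs to know whether a splat or a triangle produced the texel.
    Vec3 shade(const GBufferTexel& g, Vec3 L, Vec3 V) {
        float ndl = std::max(0.0f, dot(g.normal, L));
        Vec3  h   = normalize(add(L, V));
        float shininess = 2.0f / (g.roughness * g.roughness + 1e-4f);
        float spec = (ndl > 0.0f)
            ? std::pow(std::max(0.0f, dot(g.normal, h)), shininess) : 0.0f;
        return add(mul(g.albedo, ndl), {spec, spec, spec});
    }

    int main() {
        // A texel as a hypothetical splat rasterizer might have written it:
        GBufferTexel t{{0.8f, 0.3f, 0.2f}, normalize({0.0f, 1.0f, 0.2f}), 0.4f};
        Vec3 c = shade(t, normalize({0.5f, 1.0f, 0.3f}), {0.0f, 0.0f, 1.0f});
        std::printf("lit: %.3f %.3f %.3f\n", c.x, c.y, c.z);
    }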


For NeRF this problem exists. For Gaussian splatting, however, it has already been solved: you usually define a normal field over the (2D) splat, which allows you to have Phong shading at least.

It is not too difficult to extend that to a 2D normal field over the 3D Gaussians.
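
As an illustration of how that might look, here's a rough sketch (my own toy code, not taken from any paper) using the usual 3DGS per-Gaussian scale + rotation parameterization: take the shortest principal axis of each Gaussian as its normal, on the assumption that splats which have converged onto a surface flatten against it, and feed that into plain Phong shading:

    // Sketch under the standard 3DGS parameterization (scale vector +
    // unit-quaternion rotation). Assumption: the Gaussian is flat-ish, so
    // its shortest principal axis approximates the surface normal.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };
    struct Quat { float w, x, y, z; }; // unit quaternion (rotation of the Gaussian)

    static Vec3  mul(Vec3 a, float s) { return {a.x*s, a.y*s, a.z*s}; }
    static Vec3  add(Vec3 a, Vec3 b)  { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
    static Vec3  sub(Vec3 a, Vec3 b)  { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
    static float dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3  normalize(Vec3 v)    { return mul(v, 1.0f / std::sqrt(dot(v, v))); }

    // Column k of the rotation matrix R(q) = the k-th principal axis.
    Vec3 axis(Quat q, int k) {
        switch (k) {
        case 0:  return {1 - 2*(q.y*q.y + q.z*q.z), 2*(q.x*q.y + q.w*q.z), 2*(q.x*q.z - q.w*q.y)};
        case 1:  return {2*(q.x*q.y - q.w*q.z), 1 - 2*(q.x*q.x + q.z*q.z), 2*(q.y*q.z + q.w*q.x)};
        default: return {2*(q.x*q.z + q.w*q.y), 2*(q.y*q.z - q.w*q.x), 1 - 2*(q.x*q.x + q.y*q.y)};
        }
    }

    // Normal = axis with the smallest scale, flipped toward the viewer
    // (the shortest axis only defines the normal up to sign).
    Vec3 splatNormal(Vec3 scale, Quat rot, Vec3 viewDir) {
        int k = (scale.x <= scale.y && scale.x <= scale.z) ? 0
              : (scale.y <= scale.z) ? 1 : 2;
        Vec3 n = axis(rot, k);
        return dot(n, viewDir) < 0.0f ? mul(n, -1.0f) : n;
    }

    // Classic Phong: ambient + diffuse + specular via the reflection vector.
    Vec3 phong(Vec3 albedo, Vec3 n, Vec3 L, Vec3 V) {
        float ndl = std::max(0.0f, dot(n, L));
        Vec3  r   = sub(mul(n, 2.0f * dot(n, L)), L); // reflect L about n
        float spec = (ndl > 0.0f) ? std::pow(std::max(0.0f, dot(r, V)), 32.0f) : 0.0f;
        return add(mul(albedo, 0.1f + ndl), {spec, spec, spec});
    }

    int main() {
        Vec3 scale = {0.5f, 0.5f, 0.01f};      // flat splat: z-axis is shortest
        Quat rot   = {1.0f, 0.0f, 0.0f, 0.0f}; // identity rotation
        Vec3 V     = {0.0f, 0.0f, 1.0f};
        Vec3 n     = splatNormal(scale, rot, V);
        Vec3 c     = phong({0.2f, 0.6f, 0.9f}, n, normalize({0.3f, 0.4f, 1.0f}), V);
        std::printf("n=(%.2f %.2f %.2f) lit=%.3f %.3f %.3f\n", n.x, n.y, n.z, c.x, c.y, c.z);
    }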



