
You're comparing apples to oranges: holding up today's practical real-time rendering techniques against a hypothetical future neural method that runs many orders of magnitude faster than anything available today and somehow solves the issues of temporal stability, directability, and overall robustness. If we grant "equation-based" methods the same liberty, then we should be looking ahead to real-time path tracing research, which is far closer to practicality than these pure ML experiments.

That's not to say ML doesn't have a place in the pipeline - path tracers pair very well with ML-driven heuristics for things like denoising, but in that case the underlying signal is still grounded in physics and the ML part is just papering over the gaps.
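A toy sketch of that division of labor: the physically based estimator produces the signal, and the denoiser only cleans up residual Monte Carlo noise. Everything here is illustrative (a real denoiser such as Intel's OIDN learns its filter from data rather than hard-coding a smoothing kernel):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a path tracer: the "true" radiance of a 1D scanline,
# plus per-pixel Monte Carlo noise that shrinks as sample count grows.
def path_trace(true_radiance, samples_per_pixel):
    noise = rng.normal(0.0, 1.0 / np.sqrt(samples_per_pixel),
                       size=true_radiance.shape)
    return true_radiance + noise

# Stand-in for the ML denoiser: a fixed smoothing kernel. The input is
# still the physics-grounded estimate; this step only papers over noise.
def denoise(noisy):
    kernel = np.array([0.05, 0.25, 0.4, 0.25, 0.05])
    return np.convolve(noisy, kernel, mode="same")

truth = np.sin(np.linspace(0, np.pi, 64))
noisy = path_trace(truth, samples_per_pixel=4)
clean = denoise(noisy)

def mse(a, b):
    return float(np.mean((a - b) ** 2))

# The denoised image sits closer to the ground truth than the raw
# low-sample estimate, without the denoiser inventing the lighting.
print(mse(clean, truth) < mse(noisy, truth))
```

The point of the sketch is the data flow, not the filter: the denoiser never sees the scene, only the noisy but unbiased physical estimate.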



The question was "why does it feel more real", and I answered that: because the best AI-generated images today feel more real than the best 3D renders, even when those renders take all the compute in the world to finish. So I can imagine that trend carrying forward into real-time rendering as well.

I did not claim that AI-based rendering will overtake traditional methods - I even explicitly said that this is a heavy assumption - but I explained why I see it as exciting.


I think we'll have to agree to disagree about well-done 3D renders not feeling real. Movie studios still regularly downplay how much CGI they use for marketing purposes, and get away with it, because the CGI looks so utterly real that nobody even notices it until much later, when the VFX vendors are allowed to give a peek behind the curtain.

E.g. Top Gun: Maverick's much-lauded "practical" jet shots: the actors were filmed in real planes, but the pilots were then isolated and composited into 100% CGI planes, with the backdrops also being CGI in many cases, and huge swathes of viewers and press bought the marketing line that what they saw was all practical.



