
Not OP, but I have long thought of this type of approach (underlying "hard coded" object tracking + fuzzy AI rendering) to be the next step, so I'll respond.

The problem with equation-based rendering is that it seems to have plateaued. Hardware requirements for games keep growing, and yet every character still has that awful "plastic skin", among all the other issues, and for a lot of people (me included) this creates heavy uncanny-valley effects that make modern games unplayable.

On the other hand, images created by image models today look fully realistic. If we assume (and I fully agree that this is a strong and optimistic assumption) that it will soon be possible to run such models in real time, and that techniques for object permanence will improve (as they keep improving at an incredible pace right now), then this might finally bring us to the next level of realism.

Even if realism is not what you're aiming for, I think it's easy to imagine how this might change the game.



You're comparing apples to oranges, holding up today's practical real-time rendering techniques against a hypothetical future neural method that runs many orders of magnitude faster than anything available today, while also solving the issues of temporal stability, directability and overall robustness. If we grant "equation based" methods the same liberty, then we should be looking ahead to real-time pathtracing research, which is much closer to being practical than these pure ML experiments.

That's not to say ML doesn't have a place in the pipeline - pathtracers can pair very well with ML-driven heuristics for things like denoising, but in that case the underlying signal is still grounded in physics and the ML part is just papering over the gaps.
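To make that division of labor concrete, here's a toy sketch of my own (not from any real renderer): a path tracer produces an unbiased but noisy Monte Carlo estimate per pixel, and the denoiser's job is to trade a little bias for much lower variance. Here the "scene" is a single pixel whose true radiance is 0.5, each sample is a noisy estimate, and a plain average stands in for the learned filter.

```python
import random
import statistics

random.seed(42)
TRUE_RADIANCE = 0.5  # ground-truth pixel value the physics converges to

def noisy_sample() -> float:
    # One Monte Carlo sample: unbiased, but individually very noisy.
    return TRUE_RADIANCE + random.uniform(-0.4, 0.4)

one_sample = noisy_sample()  # a raw 1-sample-per-pixel image: grainy
denoised = statistics.fmean(noisy_sample() for _ in range(256))  # filtered

print(abs(one_sample - TRUE_RADIANCE), abs(denoised - TRUE_RADIANCE))
```

The point is that the denoiser never invents the signal; it only smooths an estimate the physics already produced, which is why errors there tend to be subtle rather than structural.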


The question was "why does it feel more real", and I answered that - because the best AI generated images today feel more real than the best 3D renders, even when they take all the compute in the world to finish. So I can imagine that trend going forward into real-time rendering as well.

I did not claim that AI-based rendering will overtake traditional methods, and have even explicitly said that this is a heavy assumption, but explained why I see it as exciting.


I think we'll have to agree to disagree about well-done 3D renders not feeling real. Movie studios still regularly underplay how much CGI they use for marketing purposes, and get away with it, because the CGI looks so utterly real that nobody even notices it until much later, when the VFX vendors are allowed to give a peek behind the curtain.

e.g. Top Gun: Maverick's much-lauded "practical" jet shots: the actors were filmed in real planes, but then the pilots were isolated and composited into 100% CGI planes, with the backdrops also being CGI in many cases, and huge swathes of viewers and press bought the marketing line that what they saw was all practical.


I find it odd that you're that bothered by uncanny valley effects from game rendering but apparently not by the same in image model outputs. They get little things wrong all the time and it puts me off the image almost instantly.


>Hardware requirements for games today keep growing, and yet every character still has that awful "plastic skin", among all the other issues

That's because the number of pixels to render keeps growing. Instead of focusing on physically based animation and reactions, we chose to leap from 480p to 720p overnight, and then to 1080p a few years later. Now we've quadrupled that again and want more fidelity at 4x the resolution of the last generation.
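The pixel counts make the scale of that leap obvious. A quick check, using the standard 16:9 dimensions for each resolution:

```python
# Pixels per frame at common resolutions (standard 16:9 dimensions).
resolutions = {
    "480p": (854, 480),
    "720p": (1280, 720),
    "1080p": (1920, 1080),
    "4K": (3840, 2160),
}

pixels = {name: w * h for name, (w, h) in resolutions.items()}
for name, count in pixels.items():
    print(f"{name}: {count:,} pixels ({count / pixels['1080p']:.2f}x 1080p)")
```

4K is exactly four times the pixels of 1080p, and roughly twenty times 480p, so even a constant per-pixel shading budget means a huge jump in total work before any fidelity improvements.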

> images created by image models today look fully realistic.

Because they aren't made in real time (I'll give the benefit of the doubt for now and say they are "fully realistic"). Even this sample here claims 100ms. Rendering at 6-7 seconds per frame isn't going to work for any consumer product at any point in gaming history.

>Even if realism is not what you're aiming for, I think it's easy to imagine how this might change the game.

Not in real-time rendering. I am interested to see if this can help with offline stuff, but we're already so strapped for performance without also needing to query an "oracle" between frames.

As of now, I'm not even that convinced by the Nvidia 5X series' frame interpolation (which is only doable by hardware manufacturers anyway).
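For context on why interpolated frames draw skepticism, here's a deliberately naive sketch (my own toy, not Nvidia's method, which warps pixels along motion vectors): generating an in-between frame by linearly blending two rendered frames. A plain blend like this ghosts on fast motion, and it also adds latency, since you must hold frame B before you can show the blend.

```python
def interpolate(frame_a: list[float], frame_b: list[float], t: float) -> list[float]:
    """Blend two frames (flat pixel lists) at time t in [0, 1]."""
    return [(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

frame_a = [0.0, 0.25, 0.5]   # three pixels of the earlier frame
frame_b = [1.0, 0.75, 0.0]   # the same pixels one frame later
mid = interpolate(frame_a, frame_b, 0.5)
print(mid)  # [0.5, 0.5, 0.25]
```

Motion-vector-based generation avoids the worst blending artifacts, but the latency cost is inherent to any method that interpolates rather than extrapolates.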


Today's game graphics can render in any style the artist chooses. It's actually AI that has the inability to produce a unique style.



