
Seconded, his "Ray Tracing in One Weekend" is great. Simple, and so satisfying once you start producing images.


Since we're on the subject ... is real-time tracing feasible in the next decade? i.e. if we had a load of cores, can we parallelize the heck out of it?


>> is real-time tracing feasible in the next decade?

It's feasible today, it just depends on what quality level, scene complexity, and frame rate you're looking for. I can trace the standard bunny model with 2 light sources in a 640x480 window at >10 FPS on my dual core AMD64 from 2005. The problem is that we want better surface models, global illumination, and higher resolution. OTOH, we can get 8 core 3GHz+ processors today, so that makes simple renderings go pretty well. You should be able to render very complex geometry at HD resolution without any lighting effects at very interactive rates, but if that's all you want, just throw triangles at OpenGL.
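
To the parallelism question upthread: the per-pixel work is embarrassingly parallel, since each primary ray is independent. A minimal sketch in C++, assuming a hypothetical trace() that shades one primary ray (everything here is illustrative, not from any particular renderer):

    #include <algorithm>
    #include <thread>
    #include <vector>

    struct Color { float r = 0, g = 0, b = 0; };
    struct Scene { /* geometry, lights, camera ... */ };

    // Placeholder: a real tracer would intersect the primary ray with the scene here.
    Color trace(const Scene&, int x, int y) {
        return {x / 640.0f, y / 480.0f, 0.2f};
    }

    void render(const Scene& scene, std::vector<Color>& image, int width, int height) {
        unsigned n = std::max(1u, std::thread::hardware_concurrency());
        std::vector<std::thread> workers;
        for (unsigned t = 0; t < n; ++t) {
            workers.emplace_back([&, t] {
                for (int y = int(t); y < height; y += int(n))   // interleaved scanlines
                    for (int x = 0; x < width; ++x)
                        image[y * width + x] = trace(scene, x, y);
            });
        }
        for (auto& w : workers) w.join();
    }

Per-pixel parallelism like this scales close to linearly with core count until memory bandwidth and cache behavior take over.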


Regarding your last point .. I thought the point of ray tracing is that you get a lot more realistic images than with OpenGL (at least compared to OpenGL code that isn't heavily optimized). If you want 60fps VR, that's 16ms of latency for everything including rendering. In fact, if the user moves their head, there might even be a tighter deadline (I don't know what the number is, but I think this is called motion-to-photon latency).


>> Regarding your last point .. I thought the point of ray tracing is that you get a lot more realistic images than with OpenGL

That's correct - you CAN get more realistic images. I guess what I was trying to say is that the better (photo-realistic) rendering quality is not real-time yet on high resolution displays, but simple rendering using ray tracing techniques can be near real time. But if all you want is simple rendering with Phong shading, you're probably not going to bother with ray tracing.


> Regarding your last point .. I thought the point of ray tracing is that you get a lot more realistic images than with OpenGL

With basic (i.e. Whitted) ray tracing, you get shadows, reflection, and refraction. You can do that in OpenGL too, but it's more work and you have to go through some unnatural contortions and/or use approximations that might look convincing but aren't physically accurate.
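
For concreteness, the recursive structure that gives you those shadows, reflections, and refractions looks roughly like the sketch below; all the types and helpers are illustrative stand-ins, not any particular renderer's API.

    struct Vec3 {
        float x = 0, y = 0, z = 0;
        Vec3 operator+(Vec3 o) const { return {x + o.x, y + o.y, z + o.z}; }
        Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
    };
    struct Ray { Vec3 origin, dir; };
    struct Hit { bool ok = false; Vec3 point, normal; float reflectivity = 0, transparency = 0; };

    // Placeholder scene queries -- a real tracer tests actual geometry.
    Hit  closestHit(const Ray&)   { return {}; }
    bool inShadow(const Vec3&)    { return false; }
    Vec3 localShading(const Hit&) { return {0.5f, 0.5f, 0.5f}; }

    Vec3 reflectDir(const Vec3& d, const Vec3& n) {
        float dn = d.x * n.x + d.y * n.y + d.z * n.z;
        return d + n * (-2.0f * dn);              // mirror direction: d - 2(d.n)n
    }
    Vec3 refractDir(const Vec3& d, const Vec3&) { return d; }  // placeholder: ignores Snell's law

    Vec3 shade(const Ray& ray, int depth) {
        Hit h = closestHit(ray);
        if (!h.ok || depth > 5)                   // miss, or recursion cap
            return {};
        Vec3 color = inShadow(h.point) ? Vec3{} : localShading(h);   // shadow ray to the light
        if (h.reflectivity > 0)                   // recursive mirror bounce
            color = color + shade({h.point, reflectDir(ray.dir, h.normal)}, depth + 1) * h.reflectivity;
        if (h.transparency > 0)                   // recursive transmitted ray
            color = color + shade({h.point, refractDir(ray.dir, h.normal)}, depth + 1) * h.transparency;
        return color;
    }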

Soft shadows, focal blur, and motion blur can be supported by tracing more rays per pixel.

The big leap in realism (the kind where you would say ray tracing is definitely better than what you would see in a modern game) comes when you add global illumination, which is computationally a lot harder than basic ray tracing because it requires a large number of rays per pixel. It works by random sampling, so you can generate a blurry, grainy image fairly quickly, but noise-free images take a lot longer.
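
A rough sketch of what "a large number of rays per pixel" means in practice: jitter N samples inside each pixel and average them. The grain in a quick render is Monte Carlo variance, which only falls off as roughly 1/sqrt(N), so halving the noise costs 4x the rays. pathTrace() here is a hypothetical stand-in for the expensive part.

    #include <random>

    struct Color { float r = 0, g = 0, b = 0; };

    // Placeholder: a real path tracer would follow one random light path here.
    Color pathTrace(float u, float v, std::mt19937&) {
        return {u, v, 0.3f};
    }

    Color renderPixel(int x, int y, int width, int height, int samples, std::mt19937& rng) {
        std::uniform_real_distribution<float> jitter(0.0f, 1.0f);
        Color sum;
        for (int s = 0; s < samples; ++s) {
            float u = (x + jitter(rng)) / width;   // random position inside the pixel
            float v = (y + jitter(rng)) / height;
            Color c = pathTrace(u, v, rng);
            sum.r += c.r; sum.g += c.g; sum.b += c.b;
        }
        return {sum.r / samples, sum.g / samples, sum.b / samples};
    }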


It's been feasible for about a decade on modest hardware, just not at the super-high resolutions and framerates that most people expect from modern GPUs. (Outbound was a sort of technology demo/student project game that I remember being sluggish but playable on a Core 2 Duo.)

Also, plain raytracing can look kind of bland. Most global illumination algorithms are based on ray tracing, but require tracing a very large number of rays. So, really the question is more whether we can get to real-time path tracing, which is a harder problem.

Another problem is tooling. Game developers know how to get good performance on GPUs, but ray-tracers have completely different performance characteristics. Re-building a bounding volume hierarchy is, in general, O(N log N), so you have to be careful about partitioning your scene into things that move/deform and things that don't, and only rebuild the parts of the BVH that need updating.
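
A sketch of that static/dynamic split, with the BVH build/refit calls as hypothetical placeholders rather than a real library API:

    #include <vector>

    struct Triangle { /* vertices ... */ };

    struct BVH {
        void build(const std::vector<Triangle>&) { /* full O(N log N) rebuild */ }
        void refit(const std::vector<Triangle>&) { /* O(N) bounds update, same tree topology */ }
    };

    struct TracedScene {
        std::vector<Triangle> staticGeo;    // level geometry: built once
        std::vector<Triangle> dynamicGeo;   // characters etc.: updated every frame
        BVH staticBVH, dynamicBVH;

        void initialize() {
            staticBVH.build(staticGeo);     // pay the O(N log N) cost up front
            dynamicBVH.build(dynamicGeo);
        }
        void perFrameUpdate() {
            // Only touch the part of the hierarchy that actually moved; refit
            // when topology is unchanged, rebuild only on large deformations.
            dynamicBVH.refit(dynamicGeo);
        }
    };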


I remember reading some articles a while ago about Lucasfilm's real-time rendering, planned for their future films. Actually, here's one of the articles: http://www.loopinsight.com/2013/09/24/lucasfilm-pushes-the-b...

On the open-source side, I know that Blender uses Cycles, which kicks off ray-traced renders in the normal views on each change, and it has done so for quite a while already.


Blender only runs raytraced previews when you set the view mode to "Rendered," not in normal operation. It's tough to work in because you don't get any of the usual UI like selection highlight; it's just a straight render defaulted to relatively low quality.

Great for a quick preview though, especially while setting up lighting. And if you've been good with naming your objects you can always select things out of the document tree.


The problem is that as ray tracing advances, so do the hacks on top of rasterization to enable more realisticish images. So rasterization keeps winning. Someday, though...


Judge for yourself with these nVidia demos

https://www.google.com/search?q=nvidia+real+time+raytracing&...



Oh, you young people. Next decade? Try 24 years ago. A little game called Wolfenstein 3D used raycasting.

(Yes I know the subtleties that distinguish ray[tracing|marching|casting].)
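
For contrast, the kind of raycasting Wolfenstein did is one ray per screen column marched through a 2D tile map, which is why it was feasible on 1992 hardware. A toy sketch (map and constants made up, and a real engine would use a DDA step instead of a fixed march):

    #include <cmath>

    const int MAP_W = 8, MAP_H = 8;
    const int map[MAP_H][MAP_W] = { /* 1 = wall, 0 = empty */ };

    // Distance from (px, py) to the first wall along angle `a`, or -1 on a miss.
    float castColumn(float px, float py, float a) {
        float dx = std::cos(a), dy = std::sin(a);
        for (float t = 0.0f; t < 64.0f; t += 0.01f) {   // crude fixed-step march
            int mx = int(px + dx * t), my = int(py + dy * t);
            if (mx < 0 || my < 0 || mx >= MAP_W || my >= MAP_H) break;
            if (map[my][mx] == 1) return t;             // hit a wall tile
        }
        return -1.0f;
    }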


> (Yes I know the subtleties that distinguish ray[tracing|marching|casting].)

Even worse than not knowing them, you think you do and have no idea.



It seems to me that all physics simulation is highly parallelizable because, well, physics is parallel :)

But please correct me if I'm wrong.


Physics is also very sequential: you cannot know what will happen at time t+1 until you know exactly what happened at time t.


Not exactly true; there are cases where you can isolate islands of objects and be sure they won't interact.
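
A sketch of that island idea: group bodies connected by contacts with a union-find, and anything that ends up in a different island can be stepped on a different thread. The body/contact layout here is made up for illustration.

    #include <numeric>
    #include <utility>
    #include <vector>

    struct UnionFind {
        std::vector<int> parent;
        explicit UnionFind(int n) : parent(n) { std::iota(parent.begin(), parent.end(), 0); }
        int find(int a) { return parent[a] == a ? a : parent[a] = find(parent[a]); }
        void unite(int a, int b) { parent[find(a)] = find(b); }
    };

    // Each contact couples two bodies; bodies not linked by any chain of
    // contacts land in different islands and cannot influence each other
    // this step, so each island can be solved independently.
    std::vector<int> buildIslands(int bodyCount, const std::vector<std::pair<int, int>>& contacts) {
        UnionFind uf(bodyCount);
        for (auto [a, b] : contacts) uf.unite(a, b);
        std::vector<int> islandOf(bodyCount);
        for (int i = 0; i < bodyCount; ++i) islandOf[i] = uf.find(i);
        return islandOf;
    }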


The computation is, but the scene description isn't. In the general case, any ray can hit any object from any intersection, which means effective use of cache lines is the hard bit.


There is also the sequel, "Ray Tracing: The Next Week," which I greatly recommend; the author is also very responsive via email.


And also "Ray Tracing: The Rest of Your Life," which is part three.



