For primary visibility, you don't need more than 1 sample per pixel. It's simply "send a ray from the camera, stop on the first hit, done". No Monte Carlo needed, no noise.
On recent hardware, for some scenes, I've heard of primary visibility being faster to raytrace than to rasterize.
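To make the "one ray per pixel, stop on first hit" point concrete, here's a minimal sketch of a primary-visibility pass in plain Python. It assumes a pinhole camera at the origin looking down -z and a scene of spheres; everything here (function names, the sphere scene) is illustrative, not any engine's actual API:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for the
    # nearest positive t; returns None on a miss.
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - c  # direction is normalized, so the quadratic's a == 1
    if disc < 0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 1e-6 else None

def primary_visibility(width, height, spheres):
    """One ray per pixel: shoot from the camera, keep the closest hit, done.

    Returns a 2D grid of sphere indices; -1 means the ray hit nothing.
    No sampling loop, no accumulation -- there is nothing to denoise.
    """
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # Map the pixel to a direction through a simple pinhole camera.
            u = (x + 0.5) / width * 2 - 1
            v = 1 - (y + 0.5) / height * 2
            d = (u, v, -1.0)
            norm = math.sqrt(sum(k * k for k in d))
            d = tuple(k / norm for k in d)
            closest, hit_id = None, -1
            for i, (center, radius) in enumerate(spheres):
                t = ray_sphere_hit((0.0, 0.0, 0.0), d, center, radius)
                if t is not None and (closest is None or t < closest):
                    closest, hit_id = t, i
            row.append(hit_id)
        image.append(row)
    return image
```

A real GPU implementation would of course traverse a BVH instead of looping over every primitive, but the per-pixel structure is the same: one deterministic ray, one closest hit.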
The main reasons games currently use raster for primary visibility:
1. They already have a raster pipeline in their engine, have special geometry paths that only work in raster (e.g. Nanite), or want to support GPUs without any raytracing capability and so need to ship a raster pipeline anyway, and might as well use it for primary visibility too.
2. Acceleration structure building and memory usage is a big, unsolved problem at the moment. Unlike with raster, there aren't mature solutions like LODs, streaming, compression, and frustum/occlusion culling to keep memory and computation costs down. Not to mention that rebuilding or refitting acceleration structures every time something moves or deforms is a really big cost. So games use low-resolution "proxy" meshes for raytraced lighting and their existing high-resolution meshes for rasterized primary visibility. You can then apply your lower-quality lighting to your high-quality visibility and get a good overall image.
Nvidia's recent extensions and Blackwell hardware are changing the calculus, though. Their partitioned TLAS extension lowers the acceleration structure build cost when moving objects around; their cluster BLAS extension allows for LOD/streaming solutions to keep memory usage down, as well as cheaper deformation for things like skinned meshes, since you don't have to rebuild the entire BLAS; and Blackwell has special compression for BLAS clusters to further reduce memory usage. I expect more games in the ~near future (remember games take 4+ years of development, and they have to account for people on low-end and older hardware) to move to raytracing primary visibility and ditch raster entirely.