> AES Encryption/Decryption (aka: every HTTPS connection out there),
which has already had dedicated hardware on most x86 CPUs for a good few years now. Fuck, I have some tiny ARM core with like 32kB of RAM somewhere that rocks AES acceleration...
> So even if GPUs are faster, they still can't hold modern movie raytraced scenes in memory, so you're kinda forced to use CPUs right now.
Can't GPUs just use system memory at a performance penalty?
> which has already had dedicated hardware on most x86 CPUs for a good few years now
Yeah, and that "dedicated hardware" is called AES-NI. It started out as 128-bit SIMD instructions: one AES round, on one 128-bit block, per instruction.
With AVX-512 and the VAES extension, those same round instructions now apply to 4 blocks at a time (512-bit wide = 4 parallel 128-bit instances). That's a big, important upgrade to AES-NI.
AES-NI's next-generation implementation _IS_ the AVX-512 form. And it pays off because AES-GCM is embarrassingly parallel (apologies to everyone stuck on AES-CBC, whose encryption is inherently sequential).
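Here's roughly what that difference looks like at the intrinsics level -- a minimal sketch of the AES-128 encryption rounds only, one block vs. four. The round-key schedule `rk[11]` is assumed to be precomputed somewhere else (key expansion omitted), and the function names are just for illustration:

    // Minimal sketch (not a full AES implementation): AES-128 encryption rounds
    // with classic AES-NI (one 128-bit block per instruction) vs. VAES under
    // AVX-512 (four blocks per instruction). Assumes rk[11] is precomputed.
    // Build with something like: gcc -O2 -maes -mavx512f -mvaes -c aes_sketch.c
    #include <immintrin.h>

    // One block at a time: the original AES-NI path on 128-bit XMM registers.
    __m128i aes128_encrypt_block(__m128i block, const __m128i rk[11])
    {
        block = _mm_xor_si128(block, rk[0]);            // initial AddRoundKey
        for (int r = 1; r < 10; r++)
            block = _mm_aesenc_si128(block, rk[r]);     // rounds 1..9
        return _mm_aesenclast_si128(block, rk[10]);     // final round
    }

    // Four blocks at a time: VAES runs the same AES round in each 128-bit lane
    // of a 512-bit register.
    __m512i aes128_encrypt_4blocks(__m512i blocks, const __m128i rk[11])
    {
        __m512i k = _mm512_broadcast_i32x4(rk[0]);      // same round key in all 4 lanes
        blocks = _mm512_xor_si512(blocks, k);
        for (int r = 1; r < 10; r++) {
            k = _mm512_broadcast_i32x4(rk[r]);
            blocks = _mm512_aesenc_epi128(blocks, k);
        }
        k = _mm512_broadcast_i32x4(rk[10]);
        return _mm512_aesenclast_epi128(blocks, k);
    }

The key point: the same round key gets broadcast into all four lanes, so the only extra requirement for the 4-wide version is having 4 independent blocks to feed it -- which CTR/GCM gives you and CBC encryption doesn't.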
> Can't GPUs just use system memory at a performance penalty?
CPUs can access DDR4/DDR5 RAM at roughly 50 nanoseconds. A GPU reaching across the bus into that same DDR4/DDR5 RAM is more like 5000 nanoseconds, 100x slower than the CPU. There's no hope for the GPU to keep up, especially since raytracing is _very_ heavy on RAM latency: each ray "bounce" is basically a chain of dependent memory reads (traversing a BVH tree).
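To make the "chain of dependent memory reads" point concrete, here's a minimal sketch of an iterative BVH traversal. The node layout and names are made up for illustration, not any real renderer's data structures -- the shape of the loop is the point: you can't know which node you'll need next until the current node's box test resolves, so the whole thing serializes on memory latency.

    // Sketch only: why BVH traversal is latency-bound.
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct { float min[3], max[3]; } AABB;

    typedef struct {
        AABB    bounds;
        int32_t left, right;          // child node indices, or -1 for a leaf
    } BVHNode;

    typedef struct { float origin[3], dir[3]; } Ray;

    // Standard "slab" ray-vs-box test.
    bool ray_hits_aabb(const Ray *ray, const AABB *box)
    {
        float tmin = 0.0f, tmax = 1e30f;
        for (int a = 0; a < 3; a++) {
            float inv = 1.0f / ray->dir[a];
            float t0  = (box->min[a] - ray->origin[a]) * inv;
            float t1  = (box->max[a] - ray->origin[a]) * inv;
            if (inv < 0.0f) { float tmp = t0; t0 = t1; t1 = tmp; }
            if (t0 > tmin) tmin = t0;
            if (t1 < tmax) tmax = t1;
        }
        return tmin <= tmax;
    }

    // Each iteration pops a node index and immediately loads that node from RAM.
    // With a 100GB+ scene that node is almost never in cache, and the next load
    // depends on this one's box test -- so the loop pays full memory latency
    // over and over.
    int count_visited_nodes(const BVHNode *nodes, const Ray *ray)
    {
        int stack[64], sp = 0, visited = 0;
        stack[sp++] = 0;                              // start at the root
        while (sp > 0) {
            const BVHNode *n = &nodes[stack[--sp]];   // dependent load
            visited++;
            if (!ray_hits_aabb(ray, &n->bounds))
                continue;                             // ray misses this subtree
            if (n->left < 0)
                continue;                             // leaf: would test triangles here
            stack[sp++] = n->left;                    // push both children
            stack[sp++] = n->right;
        }
        return visited;
    }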
It's just better to use a CPU if you end up needing DDR4/DDR5 RAM to hold the data. There are algorithms that break a scene up into octrees whose boxes each hold, say, 8GB worth of data, so the GPU can calculate all the light bounces within one box (and then write out the "bounces" that leave the box), etc. etc. But this is very advanced and under heavy research.
For now, it's easier to just use a CPU that can access all 100GB+ and render the scene without splitting it up. Maybe eventually this split-the-scene / process-each-box-on-the-GPU approach will get better researched and better implemented, and GPUs will traverse system RAM a bit better.
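Purely as a toy illustration of the "process within a box, hand off the rays that leave" bookkeeping (a made-up 1-D "scene", no actual tracing, nothing to do with any real renderer):

    // Toy only: a 1-D scene split into cells, rays advanced one cell at a time
    // and handed to the next cell's queue when they cross a boundary.
    #include <stdio.h>

    #define NUM_CELLS 4
    #define CELL_SIZE 10.0f
    #define MAX_RAYS  64

    typedef struct { float pos, dir; } Ray;
    typedef struct { Ray rays[MAX_RAYS]; int count; } Cell;   // per-cell work queue

    static int cell_of(float pos) { return (int)(pos / CELL_SIZE); }

    int main(void)
    {
        Cell cells[NUM_CELLS] = {0};
        // Seed one ray in cell 0 heading through the scene.
        cells[0].rays[cells[0].count++] = (Ray){ .pos = 1.0f, .dir = 6.0f };

        for (int pass = 0; pass < 8; pass++) {
            // Visit one cell at a time -- the part whose geometry would actually
            // fit in GPU memory -- and advance only the rays queued for it.
            for (int c = 0; c < NUM_CELLS; c++) {
                int n = cells[c].count;
                cells[c].count = 0;                        // queue is re-filled below
                for (int i = 0; i < n; i++) {
                    Ray r = cells[c].rays[i];
                    r.pos += r.dir;                        // stand-in for "trace one bounce"
                    int dest = cell_of(r.pos);
                    if (dest < 0 || dest >= NUM_CELLS) {
                        printf("pass %d: ray left the scene at %.1f\n", pass, r.pos);
                    } else {
                        if (dest != c)
                            printf("pass %d: ray handed from cell %d to cell %d\n",
                                   pass, c, dest);
                        cells[dest].rays[cells[dest].count++] = r;   // hand off
                    }
                }
            }
        }
        return 0;
    }

The hard part the researchers are actually fighting -- which this toy skips entirely -- is keeping the GPU busy while cells and ray queues stream in and out of its limited memory.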
GPUs will be better eventually. But CPUs are still better at the task today.