I appreciate that this writeup takes care to call out use cases when they help with understanding!
I do have a semi-unrelated question though: does using the recursive approach prevent it from being calculated efficiently on the GPU/compute shaders? Not that it matters; plenty of value in a CPU-bound version of a solution and especially one that is easy to understand when recursive. I was just wondering why the prominent examples used a non-recursive approach, but then I was like "oh, because they expect you to use them on the GPU". ...and then I was like "wait, is that why?"
> I do have a semi-unrelated question though: does using the recursive approach prevent it from being calculated efficiently on the GPU/compute shaders?
Historically speaking, recursion in shaders and GPGPU kernels was effectively prohibited (OpenCL "C", for example, disallows it). The shader/kernel compiler would inline every function call, because traditional GPU execution models had little support for the kind of call stack that normal CPU programs rely on, so recursion was simply banned at the language level.