Anecdotally, I had the opposite experience. I've wanted to dabble in parallel/GPU programming for a while, but the fact that all the material forced me to care about triangles and matrix transforms for any nontrivial example turned me off.
I've recently been playing with WebGPU, and while it's still a bit of a boilerplate nightmare (which I wouldn't presume to know how to do better), it was far more approachable.
Wrapping my head around buffers and layouts took a while, and the fact that WGSL is a huge pain in the neck to debug didn't help.
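For anyone who hasn't hit this yet, the thing that took longest to click for me is that the bind group layout describes *how* a binding will be used, separately from the bind group that supplies the actual buffer. Roughly what that looks like (simplified sketch, names and sizes made up for illustration, not my actual code):

```ts
// Assumes a GPUDevice obtained elsewhere via navigator.gpu.requestAdapter() / requestDevice().
function createNoiseBuffers(device: GPUDevice, gridSize: number) {
  // Storage buffer the compute shader will write the noise field into.
  const noiseBuffer = device.createBuffer({
    size: gridSize * gridSize * 4, // one f32 per cell
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
  });

  // The layout declares how binding 0 is used (writable storage, compute stage);
  // the bind group plugs the actual buffer into that slot.
  const bindGroupLayout = device.createBindGroupLayout({
    entries: [
      {
        binding: 0,
        visibility: GPUShaderStage.COMPUTE,
        buffer: { type: "storage" },
      },
    ],
  });

  const bindGroup = device.createBindGroup({
    layout: bindGroupLayout,
    entries: [{ binding: 0, resource: { buffer: noiseBuffer } }],
  });

  return { noiseBuffer, bindGroupLayout, bindGroup };
}
```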
I've built myself a really rudimentary Perlin noise generator using compute shaders, and managed to pipe that into a rendering shader that uses two triangles to render part of the noise field onto a canvas really smoothly.
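The rough shape of the two shaders, as WGSL strings you'd hand to device.createShaderModule({ code }). The "noise" expression here is just a placeholder hash, not real Perlin noise, and the hardcoded grid width is an assumption for the sketch:

```ts
const computeWGSL = /* wgsl */ `
  @group(0) @binding(0) var<storage, read_write> field : array<f32>;

  @compute @workgroup_size(8, 8)
  fn main(@builtin(global_invocation_id) id : vec3u) {
    let width = 256u;                          // assumed grid width for this sketch
    if (id.x >= width || id.y >= width) { return; }
    // Placeholder value; real Perlin noise would hash and interpolate gradients here.
    field[id.y * width + id.x] =
      fract(sin(f32(id.x) * 12.9898 + f32(id.y) * 78.233) * 43758.5453);
  }
`;

const renderWGSL = /* wgsl */ `
  @group(0) @binding(0) var<storage, read> field : array<f32>;

  // Two triangles covering the whole canvas, indexed by vertex_index; no vertex buffer needed.
  @vertex
  fn vs(@builtin(vertex_index) i : u32) -> @builtin(position) vec4f {
    var corners = array<vec2f, 6>(
      vec2f(-1.0, -1.0), vec2f(1.0, -1.0), vec2f(-1.0, 1.0),
      vec2f(-1.0,  1.0), vec2f(1.0, -1.0), vec2f( 1.0, 1.0)
    );
    return vec4f(corners[i], 0.0, 1.0);
  }

  @fragment
  fn fs(@builtin(position) pos : vec4f) -> @location(0) vec4f {
    let width = 256u;
    let cell = u32(pos.y) * width + u32(pos.x);  // assumes the canvas is no larger than the grid
    let v = field[cell];
    return vec4f(v, v, v, 1.0);
  }
`;
```

Note the compute side binds the buffer as read_write storage and the render side binds it as read-only, so the two pipelines need matching (but not identical) bind group layouts.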
Trying to do some fancier compute stuff now, and overall the primitives are relatively straightforward. It's just that the documentation around binding types and layouts, resource limits, and command-encoder/pipeline operational semantics is poor right now.
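The part the docs gloss over, as I understand it: passes get recorded into a command encoder and nothing actually runs until queue.submit(), and within one submission a compute pass that writes a buffer is ordered before a later render pass that reads it. A stripped-down frame might look like this (pipelines, bind groups, and the canvas context created elsewhere, as in the sketches above):

```ts
function drawFrame(
  device: GPUDevice,
  context: GPUCanvasContext,
  computePipeline: GPUComputePipeline,
  renderPipeline: GPURenderPipeline,
  computeBindGroup: GPUBindGroup,
  renderBindGroup: GPUBindGroup,
) {
  const encoder = device.createCommandEncoder();

  // 1. Fill the noise field on the GPU.
  const computePass = encoder.beginComputePass();
  computePass.setPipeline(computePipeline);
  computePass.setBindGroup(0, computeBindGroup);
  computePass.dispatchWorkgroups(32, 32); // 256 / 8 workgroups per axis, per the shader above
  computePass.end();

  // 2. Draw the two fullscreen triangles that sample the field.
  const renderPass = encoder.beginRenderPass({
    colorAttachments: [
      {
        view: context.getCurrentTexture().createView(),
        loadOp: "clear",
        storeOp: "store",
        clearValue: { r: 0, g: 0, b: 0, a: 1 },
      },
    ],
  });
  renderPass.setPipeline(renderPipeline);
  renderPass.setBindGroup(0, renderBindGroup);
  renderPass.draw(6); // six vertices, no vertex buffer
  renderPass.end();

  // 3. Nothing has executed yet; submitting the encoded commands kicks it all off.
  device.queue.submit([encoder.finish()]);
}
```

Resource limits at least are queryable up front (e.g. device.limits.maxStorageBuffersPerShaderStage), it's just not obvious from the docs when you'll hit them.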