GPUs are still an unworkable target for wide end-user audiences because of all the fragmentation: mutually incompatible APIs across macOS/Windows/Linux, proprietary languages, poor dev experience, buggy driver stacks, etc.

Not to mention a host of smaller chilling factors (e.g. no standard way to write tightly coupled CPU/GPU code, spotty virtualization support in GPUs, lack of integration into established high-level languages, and so on).

The ML niche, which can require specific kinds of NVIDIA GPUs, seems to be an island of its own that works for some things, but it's not great.



While true, it is still easier to write shader code than to wrestle with the low-level details of SIMD and similar instruction sets, which are only exposed in a select few languages.
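To illustrate the ergonomics gap, here's a minimal sketch of the intrinsics route in C++ (assuming an x86 target with SSE; the function and names are just for illustration):

    #include <immintrin.h>  // x86 SSE intrinsics
    #include <cstddef>

    // Scale an array in place, four floats per instruction.
    // Lane width, tail handling, and ISA detection are all on you.
    void scale(float* data, std::size_t n, float factor) {
        const __m128 f = _mm_set1_ps(factor);      // broadcast to 4 lanes
        std::size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            __m128 v = _mm_loadu_ps(data + i);     // unaligned 4-float load
            _mm_storeu_ps(data + i, _mm_mul_ps(v, f));
        }
        for (; i < n; ++i)                         // scalar tail
            data[i] *= factor;
    }

A shader expresses the same thing as one scalar-looking kernel per element; the lane bookkeeping (and the per-ISA rewrite for NEON, AVX, etc.) falls to the compiler and driver instead.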

Even JavaScript has easier ways to call into GPU code than it does to expose vector instructions.


Yes, one is easier to write and the other is easier to ship, except for WebGL.

The JS/browser angle has another GPU-related parallel here. WebAssembly SIMD has been shipping for a couple of years now and, like WebGL, makes the browser platform one of the few portable ways to access this parallel-programming functionality.

(But the functionality is limited to approximately the same level as 1999-vintage x86 SSE1.)
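For a sense of what that level looks like, here's a sketch of the same style of loop using clang/Emscripten's wasm_simd128.h intrinsics (assumes building with emcc -msimd128; names are illustrative):

    #include <wasm_simd128.h>  // clang/Emscripten WASM SIMD intrinsics
    #include <cstddef>

    // y[i] = a * x[i] + y[i], four f32 lanes per op: fixed 128-bit
    // vectors, roughly the feature set SSE1 offered in 1999.
    void saxpy(float a, const float* x, float* y, std::size_t n) {
        const v128_t va = wasm_f32x4_splat(a);
        std::size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            v128_t vx = wasm_v128_load(x + i);
            v128_t vy = wasm_v128_load(y + i);
            wasm_v128_store(y + i, wasm_f32x4_add(wasm_f32x4_mul(va, vx), vy));
        }
        for (; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

Fixed 128-bit vectors only: no wider lanes, no gathers, none of the later SSE/AVX niceties.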



