
Have you tried to write a kernel for basic matrix multiplication? Because I have, and I can assure you it is very hard to get 50% of maximum FLOPs, let alone 90%. It is nothing like CPUs, where you write a * b in C and the compiler gets you 99% of the performance.

Here is an example of how hard it is: https://siboehm.com/articles/22/CUDA-MMM

And this is just basic matrix multiplication. If you add activation functions it slows down even more. There is nothing easy about GPU programming if you care about performance. CUDA gives you all that optimization on a plate.
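To make the gap concrete, here is a minimal sketch (mine, not from the linked post) of the naive kernel such walkthroughs start from: one thread per output element, with every thread re-reading a full row of A and column of B from global memory, which is why it lands far below peak FLOPs:

    // Naive matmul, C = A * B for N x N row-major matrices.
    __global__ void sgemm_naive(int N, const float *A,
                                const float *B, float *C) {
        int row = blockIdx.y * blockDim.y + threadIdx.y;
        int col = blockIdx.x * blockDim.x + threadIdx.x;
        if (row < N && col < N) {
            float acc = 0.0f;
            for (int k = 0; k < N; ++k)  // no data reuse: 2*N global loads per thread
                acc += A[row * N + k] * B[k * N + col];
            C[row * N + col] = acc;
        }
    }

Everything the linked article then does (tiling, vectorized loads, register blocking) exists to claw back the memory bandwidth this version wastes.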



Well, CUDA gives you a whole programming language where you have to figure out the optimization for your particular card's cache size and bus width.

I'm saying the API surface of what to offer for LLMs is pretty small. Yeah, optimizing it is hard, but it's "one really smart person works for a few weeks" hard, and most of the tiling techniques are public. Speaking of which, thanks for that blog post; off to read it now.
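For instance, the most widely published of those tiling techniques is shared-memory blocking. A rough sketch, assuming square TILE x TILE tiles launched with matching TILE x TILE thread blocks (kernel and parameter names are mine):

    #define TILE 32

    // Each block stages a TILE x TILE slab of A and B into shared
    // memory, so each global value is read once per tile rather than
    // once per thread.
    __global__ void sgemm_tiled(int N, const float *A,
                                const float *B, float *C) {
        __shared__ float As[TILE][TILE];
        __shared__ float Bs[TILE][TILE];
        int row = blockIdx.y * TILE + threadIdx.y;
        int col = blockIdx.x * TILE + threadIdx.x;
        float acc = 0.0f;
        for (int t = 0; t < N; t += TILE) {
            // Guarded loads so N need not be a multiple of TILE.
            As[threadIdx.y][threadIdx.x] = (row < N && t + threadIdx.x < N)
                ? A[row * N + t + threadIdx.x] : 0.0f;
            Bs[threadIdx.y][threadIdx.x] = (t + threadIdx.y < N && col < N)
                ? B[(t + threadIdx.y) * N + col] : 0.0f;
            __syncthreads();
            for (int k = 0; k < TILE; ++k)
                acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
            __syncthreads();
        }
        if (row < N && col < N)
            C[row * N + col] = acc;
    }

Picking TILE per card is exactly the cache-size and bus-width tuning the parent comment describes.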


> it's "one really smart person works for a few weeks" hard

AMD should hire that one really smart person.


yeah, they really should. the primary reason AMD is behind in the GPU space is that they massively under-prioritize software.


Not having written one of these (…well, I've written an IDCT), I can imagine it getting complicated if there's any known sparsity to take advantage of.
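As a hypothetical illustration of where sparsity complicates things: even the simple one-thread-per-row CSR matrix-vector kernel below makes memory access data-dependent through col_idx, so coalescing and load balance go out the window (names are mine, not from any particular library):

    // CSR sparse matrix-vector product, y = A * x, one thread per row.
    __global__ void spmv_csr(int nrows, const int *row_ptr,
                             const int *col_idx, const float *vals,
                             const float *x, float *y) {
        int row = blockIdx.x * blockDim.x + threadIdx.x;
        if (row < nrows) {
            float acc = 0.0f;
            // Gathers from x follow irregular column indices, and
            // rows vary in length, so warps diverge and loads scatter.
            for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
                acc += vals[j] * x[col_idx[j]];
            y[row] = acc;
        }
    }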


I assure you from experience that it's more than a smart person for a few weeks.



