Hacker News

Maybe so. But it isn't confidence inspiring when I go to see which cards are supported and I see this issue:

https://github.com/ROCm/ROCm/issues/1714

With Nvidia cards, I know that if I buy any Nvidia card made in the last 10 years, CUDA code will run on it. Period. (Yes, different language levels require newer hardware, but Nvidia docs are quite clear about which CUDA versions require which silicon.) I have an AMD Zen3 APU with a tiny Vega in it; I ought to be able to mess around with HIP with ~zero fuss.

The will-they-won't-they and the rapidly dropped support are hurting the otherwise excellent ROCm and HIP projects. There is a huge API surface to implement, and it looks like they're making rapid gains.




That's from 2022. AMD's move to start generally supporting consumer cards is very recent.


Where's the official show of support? I'll believe it when I see it.


They're listed as supported on their website and they work. I'm not sure what there is besides that.

https://rocm.docs.amd.com/en/latest/release/gpu_os_support.h...

You have to click on the "Radeon" tab for the commercial cards.

Yes, it's annoying that they only officially support Ubuntu 22.04, but it is official support, and you can get other OSes and cards to work.


The article is specifically about AI. Don't most useful LLMs require too much VRAM for consumer Nvidia cards, and often need newer hardware features too, making it irrelevant that a G80 could run some sort of CUDA code?

I'm not particularly optimistic that ecosystem support will ever pan out well enough for AMD to be viable, but this seems to give Nvidia a bit too much credit for democratizing AI development.


First of all, LLMs are not the only AI in existence. A lot of ML, stats, and general compute can run on consumer-grade GPUs, and there are plenty of problems where an LLM isn't applicable at all.

Second, you absolutely can run and fine-tune many open-source LLMs on one or more 3090s.

But being able to tinker, learn to write code, etc. on a consumer GPU is a gateway to the more compute-focused cards.


There's a difference between officially supported, and supported. My 6900XT, an unsupported card, works just fine.


Then they should indicate that! Putting me off from considering an AMD card for purchase is very detrimental to building a userbase.


I 100% agree with that. The override environment variable (HSA_OVERRIDE_GFX_VERSION) is also buried deep in their documentation. NVIDIA is eating AMD's breakfast with RTX 3060s while AMD is trying to peddle 7900 XTs.
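For anyone hunting for it: the variable is just exported in the shell before launching a ROCm workload. A minimal sketch, with the caveat that 10.3.0 (gfx1030) is only an example value, commonly used for officially unsupported RDNA2 cards; the right override depends on your specific GPU:

```shell
# HSA_OVERRIDE_GFX_VERSION makes the ROCm runtime report a different
# GFX ISA version than the hardware's real one. 10.3.0 corresponds
# to gfx1030; pick the value that matches a supported sibling of
# your card.
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# Any ROCm workload launched from this shell inherits the override.
echo "override set to: ${HSA_OVERRIDE_GFX_VERSION}"
```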


Pretty sure my Radeon R9 285 would work if I forced the gfx802 offload arch when building for ROCm, but what are you going to do with a decade-old card's VRAM? 2 GB is not enough for anybody.
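The arithmetic behind that last point is easy to sketch. This is a rough, weights-only estimate; real workloads also need room for activations, KV cache, and framework overhead:

```shell
# Weights-only memory for a model: parameters * bytes per parameter.
# At fp16 (2 bytes per parameter), even a small 1B-parameter model
# fills a 2 GB card before anything else is allocated.
params_billion=1
bytes_per_param=2   # fp16
echo "$(( params_billion * bytes_per_param )) GB for weights alone"
```

So forcing gfx802 to work would only get you a card that can't hold even the smallest interesting models.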



