
In my case, with a 6900 XT:

1. sudo pacman -S ollama-rocm

2. ollama serve

3. ollama run deepseek-r1:32b
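To confirm the model actually landed in VRAM after step 3, this is roughly what I check (a sketch, assuming rocm-smi is installed and an ollama version new enough to have the ps subcommand):

    ollama ps                      # shows loaded models and the CPU/GPU split
    rocm-smi --showmeminfo vram    # VRAM usage as reported by the ROCm driver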



Does that entire model fit in gpu memory? How's it run?

I tried running a model larger than my VRAM, and it loads some layers onto the GPU but offloads the rest to the CPU. It's faster than CPU alone for me, but not by a lot.
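If you want to see or tune that split, ollama exposes it; a minimal sketch, assuming a recent ollama where num_gpu is the number of layers offloaded to the GPU:

    ollama ps                       # PROCESSOR column shows e.g. "48%/52% CPU/GPU" when it doesn't fit
    ollama run deepseek-r1:32b
    >>> /set parameter num_gpu 30   # offload only 30 layers to the GPU; the rest stay on the CPU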


You're right, I noticed the GPU clocking up and down with the 32b model; 14b keeps it clocked up fully and actually runs faster.
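Easy to watch while a prompt is running, assuming the rocm-smi tool is installed:

    watch -n 1 rocm-smi    # SCLK/MCLK and GPU% columns update once a second while the model generates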


Nice. The last time I tried out ROCm on Arch, a few years ago, it was a nightmare. Glad to see it's just one package install away these days, assuming you didn't do any setup beforehand.


I think you do still have to have the ROCm drivers installed, but it's not very hard to do from AMD's website.


Everything came from the Arch repos, well, CachyOS and Arch :)
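For anyone else on plain Arch, roughly what that looks like (a sketch; the package names are from the extra repo and may change, and the HSA override is only needed for cards ROCm doesn't officially support, not a 6900 XT):

    sudo pacman -S ollama-rocm rocm-smi-lib    # ollama's ROCm backend plus the rocm-smi monitoring tool
    # for unsupported RDNA2 cards people commonly force the gfx target, e.g.:
    HSA_OVERRIDE_GFX_VERSION=10.3.0 ollama serve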



