Hacker News

Memory capacity may matter more than speed for inference, though. As long as you're not training or fine-tuning, the Mac Pro / Studio may be just fine.
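A rough sanity check of why capacity dominates: the weights alone set a floor on RAM. A minimal back-of-the-envelope sketch (the function name and the 1.2x overhead factor for activations/KV cache are illustrative guesses, not a standard formula):

```python
def inference_memory_gb(n_params: float, bytes_per_param: int, overhead: float = 1.2) -> float:
    """Rough RAM estimate for holding a model's weights for inference.

    overhead is a fudge factor for activations and KV cache; 1.2 is a guess.
    """
    return n_params * bytes_per_param * overhead / 1e9

# A 7B-parameter model in float16 (2 bytes per parameter):
print(round(inference_memory_gb(7e9, 2), 1))  # ~16.8 GB with the assumed overhead
```

So a 64 GB or 128 GB unified-memory Mac can hold models that won't fit on most single consumer GPUs, even if it runs them slower.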

Apart from the fact that you can't use any of the many NVIDIA-specific things: if you're dependent on CUDA, NVCUVID, AMP, or other NVIDIA-only features, that's a hard no.
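In PyTorch terms, the portable pattern is to probe for a backend instead of assuming CUDA, so the same code falls back to Apple's Metal backend (MPS) or the CPU. A minimal sketch: the boolean flags here stand in for the real probes, `torch.cuda.is_available()` and `torch.backends.mps.is_available()`, so the selection logic can be shown without requiring torch:

```python
def choose_device(cuda_available: bool, mps_available: bool) -> str:
    # Prefer CUDA where it exists; fall back to MPS on Apple Silicon, then CPU.
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"

# On an M1 Max: no CUDA, MPS present.
print(choose_device(False, True))  # -> mps
```

This only helps for code that sticks to device-agnostic PyTorch ops; anything that calls CUDA-specific libraries directly still won't run.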



What are the current best ML language models to play with on the M1 Max?



