
In addition to the tools other people responded with, a good rule of thumb is that most local models work best* at q4 quants, meaning the memory for the weights is a little over half the parameter count in GB, e.g. a 14B model may be about 8GB. Add some more for context, and maybe you want 10GB of VRAM for a 14B model. That will at least put you in the right ballpark for which models to consider for your hardware.

(*best performance-per-size ratio; generally, if the model easily fits at q4 you're better off going to a higher parameter count than to a larger quant, and vice versa)
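
If it helps, here's a quick back-of-the-envelope version of that rule in Python (a rough sketch: the ~4.5 bits/weight figure for q4-style quants and the 2GB context allowance are illustrative assumptions, not exact values):

    # Rough VRAM estimate for a local model at a given quantization.
    def estimate_vram_gb(params_billion, bits_per_weight=4.5, context_overhead_gb=2.0):
        # q4 quants average a bit over 4 bits per weight, so the weights
        # take roughly params * bits / 8 bytes (1B params at 1 byte/param = 1GB).
        weights_gb = params_billion * bits_per_weight / 8
        # Leave headroom for the KV cache and runtime buffers.
        return weights_gb + context_overhead_gb

    # A 14B model: ~7.9GB for weights, ~9.9GB total, i.e. right around 10GB.
    print(f"{estimate_vram_gb(14):.1f} GB")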



> maybe you want 10GB of VRAM for a 14B model

... or if you have Apple hardware with their unified memory, whatever the assholes soldered in is your limit.



