> You can run a model with substantially similar capabilities to Claude or ChatGPT locally

I am all for local models, but this is massively overselling what they are capable of on common consumer hardware (32GB RAM).

If you are interested in what your hardware can pull off, find the top-ranking ~30B models on lmarena.ai and start a direct chat with them on the same site. Pose your usual questions and see whether the answers satisfy you; a rough memory estimate like the sketch below tells you which models will even fit.
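
As a back-of-the-envelope check (the bits-per-weight and overhead figures here are ballpark assumptions, not measurements): a quantized model's resident memory is roughly parameter count times bits per weight, plus KV cache and runtime buffers.

    # Rough memory footprint for a quantized local model.
    # bits_per_weight and overhead_gb are ballpark assumptions
    # (4-5 bits/weight is typical for Q4-class GGUF quantizations).
    def model_footprint_gb(params_billions, bits_per_weight=4.5, overhead_gb=2.0):
        # 1e9 params * (bits/8) bytes each, in GB: the factor of 1e9 cancels.
        weights_gb = params_billions * bits_per_weight / 8
        return weights_gb + overhead_gb  # KV cache, buffers, OS headroom

    # A ~30B model at ~4.5 bits/weight: roughly 19 GB, tight but feasible in 32GB RAM.
    print(f"{model_footprint_gb(30):.1f} GB")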



Two points: 1) I actually think that smaller models are substantially similar to frontier models. Of course the latter are more capable, but they're more similar than different (which I think the Elo scores on lmarena.ai suggest).

2) You can run much larger models on Apple Silicon with surprisingly decent speed, because its unified memory lets the GPU address the full system RAM.
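
For example, a minimal sketch using llama-cpp-python, which runs on Metal on Apple Silicon; the GGUF path is a placeholder for whatever model file you have downloaded, not a specific recommendation.

    # Minimal sketch with llama-cpp-python (pip install llama-cpp-python).
    # The model path below is a hypothetical placeholder.
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/some-32b-instruct-q4_k_m.gguf",  # hypothetical local file
        n_gpu_layers=-1,  # offload every layer to the GPU (Metal on Apple Silicon)
        n_ctx=8192,       # context window
    )
    out = llm("Summarize the tradeoffs of running LLMs locally.", max_tokens=256)
    print(out["choices"][0]["text"])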



