Hacker News

Are you using a local model with Continue?


I've tried running a 3B Llama model on my old machine and it was sluggish and kinda bad, but Claude 3.5 Sonnet works great. I'll try the 70B model; it should be good enough.


I didn't have much success with Ollama either. You really need a great accelerator for it to be useful enough. Codestral, however, is working out great, so thanks for the tip.
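
For anyone wanting to reproduce this setup, here's a minimal sketch of pointing Continue at a Codestral model served by Ollama. This assumes Continue's JSON config file (typically `~/.continue/config.json`) and that you've already pulled the model with `ollama pull codestral`; field names are based on Continue's documented config format and may differ in newer versions.

```json
{
  "models": [
    {
      "title": "Codestral (local)",
      "provider": "ollama",
      "model": "codestral"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Codestral autocomplete",
    "provider": "ollama",
    "model": "codestral"
  }
}
```

With this in place, Continue talks to the local Ollama server (default `http://localhost:11434`) instead of a hosted API, so chat and autocomplete both stay on-device.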


Yeah, I've found local models are completely useless for coding tasks, whereas OpenAI and Claude can at least do some refactors with guidance.




