Hacker News

I see.

I can't give any anecdotal evidence on ChatGPT/Gemini/Bard, but I've been running small LLMs locally over the past few months and have had a great experience with these two models:

- https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B (general usage)

- https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instr... (coding)

OpenChat 3.5 is also very good for general usage, but IMO NeuralHermes surpassed it significantly, so I switched a few days ago.
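One practical detail when running these locally: NeuralHermes-2.5 is (per its model card) a DPO fine-tune of OpenHermes-2.5, which expects the ChatML prompt format, so raw prompts without that wrapper tend to give worse results. A minimal sketch of building such a prompt by hand, assuming the ChatML template applies (the system/user strings are just placeholders):

```python
# Hedged sketch: NeuralHermes-2.5 derives from OpenHermes-2.5, which is
# trained on the ChatML prompt format. This helper builds a single-turn
# ChatML prompt; role contents here are illustrative placeholders.

def chatml_prompt(system: str, user: str) -> str:
    """Wrap a system message and one user turn in ChatML markers,
    leaving the prompt open at the assistant turn for generation."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml_prompt("You are a helpful assistant.", "Hello!")
print(prompt)
```

In practice you'd usually let your runtime apply the model's bundled chat template instead of hand-rolling this, but seeing the raw format helps when debugging odd outputs.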



Thank you for the suggestions – really helpful for my hobby project. Can't run anything bigger than 7B on my local setup, which is a fun constraint to play with.


Thanks! I’ve had a good experience with deepseek-coder:33b, so maybe they’re on to something.



