
> How does that work exactly? Do you have a link?

https://ollama.com lets you run models on your own hardware and serve them over a network. Then you point your editor at that server, e.g. https://zed.dev/docs/ai/configuration#ollama
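
Ollama serves an HTTP API on port 11434 by default, so anything on your network can talk to it, not just your editor. A minimal sketch of hitting it directly, assuming Ollama is running locally and a model named "llama3" has already been pulled (the model name and prompt are just placeholders):

    # Minimal sketch: query a local Ollama server over its HTTP API.
    # Assumes Ollama is listening on the default port 11434 and that a
    # model named "llama3" was pulled beforehand (`ollama pull llama3`).
    import json
    import urllib.request

    OLLAMA_URL = "http://localhost:11434/api/generate"

    payload = json.dumps({
        "model": "llama3",   # illustrative model name; use whatever you pulled
        "prompt": "Write a haiku about code review.",
        "stream": False,     # return one JSON object instead of a stream
    }).encode("utf-8")

    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)

    print(body["response"])  # the generated text

An editor integration like Zed's is doing essentially this under the hood; the docs link above just has you point it at the same base URL.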



Don't use Ollama, use llama.cpp instead.
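
If you do go the llama.cpp route, its bundled llama-server also exposes an HTTP API (including OpenAI-compatible endpoints) that editors can be pointed at. A minimal sketch, assuming the server was started with something like `llama-server -m model.gguf --port 8080` on the same machine:

    # Minimal sketch: query a local llama.cpp `llama-server` instance via
    # its OpenAI-compatible chat endpoint. Port and paths assume defaults.
    import json
    import urllib.request

    LLAMA_URL = "http://localhost:8080/v1/chat/completions"

    payload = json.dumps({
        "messages": [{"role": "user", "content": "Summarise this function for me."}],
    }).encode("utf-8")

    req = urllib.request.Request(
        LLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)

    print(body["choices"][0]["message"]["content"])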



