
It would be nice to add some attribution, but llama.cpp is MIT licensed, so what Ollama is doing is perfectly acceptable. Also, Ollama itself is open source (also MIT). You can bet any for-profit outfit using llama.cpp under the hood isn't going to mention it, and while I think we should hold open source projects to a slightly higher standard, this isn't really beyond the pale for me.

While you find the value-add to be "marginal", I wouldn't agree. In the linked comment you say "setting up llama.cpp locally is quite easy and well documented". OK, but it's still nowhere near as fast or easy to set up as Ollama; I know, I've done both.
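To make that concrete, here is roughly what the two paths looked like for me. The exact binary names, the install one-liner, and the model tag are from memory and change between versions (the model file path is just a placeholder), so treat this as a sketch rather than canonical instructions:

  # llama.cpp: build from source, then find and download a GGUF model yourself
  git clone https://github.com/ggerganov/llama.cpp
  cd llama.cpp && make
  ./main -m ./models/your-model.gguf -p "Hello"   # "your-model.gguf" is whatever quantized model you tracked down

  # Ollama: install script, then it pulls and serves the model for you
  curl -fsSL https://ollama.com/install.sh | sh
  ollama run llama2

The build step itself is comparable either way; it's the steps around it (finding a usable quantized model, keeping it in sync with the binary) where Ollama saves time.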



Running make vs. go build? I don't see much difference.

I personally settled on text-generation-webui.



