
How will they do this?

You can't take the free stuff away. It's on my hard drive.

They can stop releasing them, but local models aren't going anywhere.



They can't take the current open models away, but those will eventually (and, I imagine, rather quickly) become obsolete for many areas of knowledge work that require relatively up-to-date information.


What are the hardware and software requirements for a self-hosted LLM that is akin to Claude?


Llama 3.3 70B, after quantization, runs reasonably well on a 24 GB GPU (7900 XTX or 4090) with 64 GB of system RAM. Software: https://github.com/ggerganov/llama.cpp
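
For a concrete sense of the software side, here's a minimal sketch using the llama-cpp-python bindings (a Python wrapper around llama.cpp); the GGUF filename and the layer-offload count are placeholders to adjust for your hardware. The arithmetic behind the hybrid GPU/CPU setup: 70B parameters at 4-bit quantization is roughly 70e9 × 0.5 bytes ≈ 40 GB of weights, so the model can't fit entirely on a 24 GB card and the remainder lives in system RAM.

    # pip install llama-cpp-python  (build with CUDA or ROCm support for GPU offload)
    from llama_cpp import Llama

    llm = Llama(
        model_path="Llama-3.3-70B-Instruct-Q4_K_M.gguf",  # placeholder: any 70B GGUF quant
        n_gpu_layers=40,  # offload as many layers as fit in 24 GB VRAM; the rest run on CPU
        n_ctx=8192,       # context window; larger values need more memory
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Summarize what llama.cpp does."}]
    )
    print(out["choices"][0]["message"]["content"])

llama.cpp itself also ships a command-line client and an OpenAI-compatible HTTP server, so the Python layer is optional if all you want is a local endpoint.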



