
All the Llama models, including the 70B one, can run on consumer hardware. You might be able to fit GPT-3 (175B) at Q4 or Q3 (4- or 3-bit quantization) on a Mac Studio, but that's probably the limit for consumer hardware. At 4-bit, a 7B model needs roughly 4 GB of RAM, so it should be possible to run one on a phone, just not very fast.
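Back-of-envelope (my own numbers, not from any source): weights take roughly params × bits / 8 bytes, plus some overhead for the KV cache and runtime buffers. A quick Python sketch, where the 10% overhead factor is a guess:

    def model_ram_gb(params_b, bits, overhead=1.10):
        # weights: params * bits/8 bytes; overhead is a guessed
        # fudge factor for KV cache, activations, runtime buffers
        return params_b * 1e9 * bits / 8 * overhead / 2**30

    for name, p in [("7B", 7), ("70B", 70), ("175B", 175)]:
        print(f"{name} @ 4-bit: ~{model_ram_gb(p, 4):.0f} GB")
    # 7B @ 4-bit: ~4 GB, 70B @ 4-bit: ~36 GB, 175B @ 4-bit: ~90 GB

Which lines up with the claims above: ~4 GB for a 7B model at 4-bit, and ~90 GB for a 175B model, i.e. within reach of a maxed-out Mac Studio but not much else on the consumer side.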



GPT-3.5 Turbo is 20B.


I doubt that. What's your source?


There was a paper published by Microsoft that seemed to leak this detail. I'm on mobile right now and don't have a link, but it should be searchable.


The paper was https://arxiv.org/abs/2310.17680

It has been withdrawn with this note:

> Contains inappropriately sourced conjecture of OpenAI's ChatGPT parameter count from this http URL, a citation which was omitted. The authors do not have direct knowledge or verification of this information, and relied solely on this article, which may lead to public confusion

(The noted URL is just a Forbes blogger with no special qualifications that would make what he claimed particularly credible.)



