
> > Since they called out a specific amount of memory that is entirely irrelevant to anyone actually running 7B models, I was responding to that.

> Which is correct, fp16 takes two bytes per weight, so it will be 7 billion * 2 bytes which is exactly 14GB.

As I said, it is "entirely irrelevant"; that was my exact wording. Nowhere did I say the fp16 calculation was wrong. The point is that numbers like that, while technically correct, can mislead people unfamiliar with the subject matter.

No one is deploying LLMs to end users at fp16. It would be a huge waste and provide a bad experience. This discussion is about Copilot+, which is all about managed AI experiences that "just work" for the end user. Professional-grade stuff, and I believe Microsoft has good enough engineers to know better than to deploy fp16 LLMs to end users.
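As a back-of-the-envelope sketch of why (Python; the ~4.5 bits/weight for the 4-bit row is my own rough allowance for block-scale overhead in common 4-bit quantization schemes, not anything Microsoft has published):

    # Weight-memory footprint of a 7B-parameter model at different precisions.
    # Decimal GB (1e9 bytes) to match the "exactly 14GB" figure above.
    PARAMS = 7_000_000_000

    formats = {
        "fp16": 16,          # 2 bytes/weight
        "int8": 8,           # 1 byte/weight
        "4-bit quant": 4.5,  # approximation, includes block-scale overhead
    }

    for name, bits in formats.items():
        gb = PARAMS * bits / 8 / 1e9
        print(f"{name:>12}: {gb:4.1f} GB")

    # fp16: 14.0 GB, int8: 7.0 GB, 4-bit quant: 3.9 GB

The same 7B model that needs 14GB at fp16 fits in roughly 4GB once quantized, which is why the fp16 number tells you almost nothing about what actually ships to end users.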


