
FYI, there are rumors that the K2 model Groq is serving is quantized, or otherwise produces lower-quality responses than expected due to some optimization.

I tested it and the speed is incredible, though.
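For anyone wondering how quantization could change a model's outputs at all, here's a purely illustrative sketch of per-tensor int8 weight quantization in NumPy. The toy weight matrix, the scale factor, and the error metrics are all made up for the example; it shows the generic precision loss from quantize/dequantize, not anything about how Groq actually serves K2.

    import numpy as np

    # Illustrative only: round-trip a small weight matrix through int8
    # quantization and measure the error introduced. Generic per-tensor
    # quantization, not a claim about Groq's K2 deployment.
    rng = np.random.default_rng(0)
    w = rng.normal(0, 0.02, size=(4, 8)).astype(np.float32)  # toy "weights"

    scale = np.abs(w).max() / 127.0          # map the fp32 range onto int8
    w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    w_dq = w_q.astype(np.float32) * scale    # dequantize back to fp32

    print("max abs error:", np.abs(w - w_dq).max())
    print("mean abs error:", np.abs(w - w_dq).mean())

Small per-weight errors like this usually don't matter much on their own, but they can compound across layers, which is why a quantized serving stack can feel subtly "dumber" than the reference weights.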



have they managed to remove the "output may contain mistakes" disclaimer from a single LLM yet?


Never will.

But then, same for humans yes?


> But then, same for humans yes?

And? What's your point? This is a computer. Humans make errors doing arithmetic; does that mean we shouldn't expect computers to perform arithmetic reliably? Of course not. It's a silly retort, and a common one from people suitably wowed by the current generation of AI.


This is incredibly dumb.


That's what I'm trying to tell you.



