
I'm surprised that Gemini 2.0 is first now. I remember that Google models were underperforming on Kagi benchmarks.



Having your own hardware to run LLMs will pay dividends. Despite getting off on the wrong foot, I still believe Google is best positioned to run away with the AI lead, solely because they are not beholden to Nvidia and not stuck with a third-party cloud provider. They are the only AI team that is in-house from top to bottom.


I've used Gemini for its large context window before. It's a great model. But specifically in this benchmark it has always scored very low. So I wonder what has changed.


I don't know, but very recent Gemini models have certainly seemed much more impressive... and it's become my daily driver.


We should still wait around to see if Huawei is able to perfect its Ascend series for training and inferencing SOTA models.


This is a great take


Gemini 2 is really good, and insanely fast.


It's also insanely cheap.


It is, but in this benchmark Gemini scored very poorly in the past.



