
Agree - seems like the rumours and mutterings were true: This model is very, very good.

Quite a few happy people at Google today I bet.

Which leads me to wonder: it's not like the Gemini 2 models were terrible either — they were consistently top 5 if not top 3 — but now they've smashed past everything with a +40 Elo jump.

Are we starting to see Google apply their compute/resources/data/money to assert dominance? What next from the recently-pretty-quiet OpenAI? Are we getting to the stage where well-funded startups like Anthropic et al. simply cannot compete with "Google-scale" for general-purpose models, and end up as coding-only niche models? Sure, you can throw GPUs at the problem and burn more investor cash, but is Google starting to run away with it on the back of their data and infrastructure advantages? Who even comes close when you factor in data? Meta are the only people I can think of, but their data must be quite narrow (basically social graph, short-form videos, and ad click data?)

Exciting times.



I think if they could have been on top earlier, they would have been. They've been struggling to catch Anthropic and OpenAI's lead, and they finally did it (for now), probably due to TPU superiority plus some secret sauce of some kind. Good! More competition means better service for the consumer.


That sounds plausible. Google have two advantages: 1. They do their own capital allocation, 2. TPUs. That likely means they can execute more training runs in parallel, experiment more, and release when they strike gold. Independent labs that depend on outside investment have to trade off experiments carefully. Hence Stargate.



