
> We’re releasing it initially with our lightweight model version of LaMDA. This much smaller model requires significantly less computing power, enabling us to scale to more users, allowing for more feedback.

Seems odd to release something worse than the competition. Is there a reason why Google wouldn't just come out with the best they have? Are they afraid this will eat into their ad revenue if people no longer need to click on links? Or are they just not able to build and deploy something at the scale of OpenAI's GPT-3?



Google has so many users that not even they have enough GPUs and TPUs to serve them all.

I personally think they should use their best models, and just make them trigger very rarely. For example, only ~once per week per user (i.e., ~0.3% of queries, assuming a user makes a few hundred queries a week).

Use a tiny model over the input query to decide whether LaMDA would do a far better job than regular search results, and only trigger it in the cases where it most benefits the user to begin with, roughly as in the sketch below.
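A minimal sketch of that gating pattern. Everything here is an illustrative assumption, not anything Google has described: the function names, the threshold, and the crude keyword heuristic (a real router would be a small learned classifier, not string matching):

    # Sketch: a cheap "router" scores each query; only queries likely to
    # benefit from a conversational answer hit the expensive large model.
    LARGE_MODEL_THRESHOLD = 0.9  # assumed cutoff, tuned so the big model fires rarely

    def route_score(query: str) -> float:
        """Stand-in for a tiny classifier: crude heuristic that favors
        open-ended, conversational queries over navigational ones."""
        q = query.lower()
        score = 0.0
        if q.endswith("?"):
            score += 0.4
        if any(w in q for w in ("why", "how", "explain", "compare")):
            score += 0.5
        if len(q.split()) > 6:  # longer queries tend to be open-ended
            score += 0.2
        return min(score, 1.0)

    def large_model_answer(query: str) -> str:
        return f"[conversational answer to: {query}]"  # placeholder for LaMDA

    def search_results(query: str) -> str:
        return f"[ten blue links for: {query}]"  # placeholder for regular search

    def answer(query: str) -> str:
        if route_score(query) >= LARGE_MODEL_THRESHOLD:
            return large_model_answer(query)  # expensive path, rare
        return search_results(query)          # cheap default path

    print(answer("why is the sky blue and not violet to our eyes?"))  # routes to the big model
    print(answer("weather boston"))                                   # stays on plain search

The point of the design is that the router itself must be orders of magnitude cheaper than the model it gates, otherwise the scoring step eats the savings.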


> Seems odd to release something worse than the competition.

This isn't actually what it says? It's saying that it's a smaller model version of LaMDA; there's no comparison to GPT-3.


> Is there a reason why Google wouldn't just come out with the best they have?

Given that ChatGPT has hit scaling issues, a faster model with higher uptime is actually a plus now, assuming quality is comparable.


> Is there a reason why Google wouldn't just come out with the best they have?

They literally stated the reason in the sentence you quoted.


Same reason Gmail/Gdrive wouldn't give everyone 100TB for free if a startup came out that did.


They're likely not willing to stomach suddenly being unprofitable again for a few years.



