> We’re releasing it initially with our lightweight model version of LaMDA. This much smaller model requires significantly less computing power, enabling us to scale to more users, allowing for more feedback.
Seems odd to release something worse than the competition. Is there a reason why Google wouldn't just come out with the best they have? Are they afraid this will eat into their ad revenue if people no longer need to click on links? Or are they just not able to build and deploy something on the scale of OpenAI's GPT-3?
Google has so many users that not even they have enough GPUs and TPUs to serve them all.
I personally think they should use their best models and just make them trigger very rarely, for example only about once per week per user (i.e., ~0.3% of queries).
Use a tiny model over the input query to decide whether LaMDA will do a far better job than regular search results, and only trigger in the cases where it most benefits the user to begin with (see the sketch below).
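A minimal sketch of that gating idea, in Python. Everything here is hypothetical: the scorer is a keyword heuristic standing in for a distilled "tiny model", the two backends are stubs (not any real Google or LaMDA API), and `TARGET_TRIGGER_RATE` just encodes the ~0.3% figure from the comment above.

```python
"""Hypothetical sketch: a cheap gate scores each query, and only the
small fraction predicted to benefit most is routed to the big LLM."""

TARGET_TRIGGER_RATE = 0.003  # ~0.3% of queries, per the comment above


def score_llm_benefit(query: str) -> float:
    """Stand-in for a tiny classifier estimating how much an LLM answer
    would beat ordinary ranked links for this query (0.0 to 1.0)."""
    cues = ("how do i", "why ", "explain", "compare", "difference between")
    return 0.9 if any(cue in query.lower() for cue in cues) else 0.1


def calibrate_threshold(sample_queries: list[str], rate: float) -> float:
    """Pick a score cutoff offline so the gate fires on roughly `rate`
    of traffic: the (1 - rate) quantile of scores on past queries."""
    scores = sorted(score_llm_benefit(q) for q in sample_queries)
    cutoff_index = min(int(len(scores) * (1 - rate)), len(scores) - 1)
    return scores[cutoff_index]


def route_query(query: str, threshold: float) -> str:
    """Send the query to the expensive LLM only when the cheap gate
    predicts a big win; otherwise fall back to regular search."""
    if score_llm_benefit(query) >= threshold:
        return f"[LLM answer for: {query}]"    # stub: big model, few queries
    return f"[ranked links for: {query}]"      # stub: serves the other ~99.7%


if __name__ == "__main__":
    past_queries = ["weather today", "why is the sky blue"]
    threshold = calibrate_threshold(past_queries, TARGET_TRIGGER_RATE)
    print(route_query("why is the sky blue", threshold))  # -> LLM answer
    print(route_query("weather today", threshold))        # -> ranked links
```

The point of calibrating the threshold offline rather than hard-coding it is that the trigger rate (and therefore the serving cost) stays pinned at the target even as the query mix shifts.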