I think you are assuming we are talking about swapping API usage from one model to another. That is not what happened. A specific product doing a specific thing uses less energy now.
To clarify: the way models become more efficient is usually by training a new one with a new architecture, quantization, etc.
This is analogous to making a computer more efficient by putting a new CPU in it. It would be completely normal to say that you made the computer more efficient, even though you've actually swapped out the hardware.
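To make the quantization point concrete, here's a minimal sketch using PyTorch's dynamic int8 quantization. This is purely illustrative on my part, not a claim about what Google actually does to its serving stack:

    import torch
    import torch.nn as nn

    # A stand-in "model": just a couple of linear layers.
    model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 1024))

    # Dynamic quantization stores the Linear weights as int8 instead of float32,
    # so each forward pass moves roughly 4x less weight data and does cheaper math.
    quantized = torch.ao.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    x = torch.randn(1, 1024)
    print(model(x).shape, quantized(x).shape)  # same interface, same output shapes

Same product-facing behaviour (approximately), lower precision underneath; that's the sense in which "the same thing" can get cheaper per request even though the weights were swapped out.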
Don’t they call all their LLM models Gemini? When they describe the methodology, the paper indicates that they used all of their AI models to come up with this figure. It looks like they even include classification and search models in this estimate.
I’m inclined to believe that they are issuing a misleading figure here, myself.
They reuse the word here for a product, not a model; it's the name of a specific product surface. There is no single model, and the models used change over time and differ between requests.
I would assume so. One important trend is that models have gotten more intelligent for the same size, so for a given product you can use a smaller model.
Again, this is pretty similar to how CPUs have changed.
> Figure 4: Median Gemini Apps text prompt emissions over time—broken down by Scope 2 MB emissions (top) and Scope 1+3 emissions (bottom). Over 12 months, we see that AI model efficiency efforts have led to a 47x reduction in the Scope 2 MB emissions per prompt, and 36x reduction in the Scope 1+3 emissions per user prompt—equivalent to a 44x reduction in total emissions per prompt.
Again, it's talking about "median Gemini" while being very careful not to name any specific numbers for any specific models.
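For what it's worth, the three ratios do at least hang together arithmetically. Here's a quick back-of-the-envelope check, under my own assumption (the paper doesn't spell this out) that per-prompt Scope 2 MB and Scope 1+3 emissions simply add:

    # If baseline per-prompt emissions split into a Scope 2 MB fraction p and a
    # Scope 1+3 fraction (1 - p), and each part shrinks by its own factor, the
    # combined reduction is 1 / (p/47 + (1 - p)/36).
    def combined_reduction(p, scope2_factor=47.0, scope13_factor=36.0):
        return 1.0 / (p / scope2_factor + (1 - p) / scope13_factor)

    # Scan for the Scope 2 share that reproduces the reported 44x overall figure.
    for p in (i / 1000 for i in range(1001)):
        if abs(combined_reduction(p) - 44.0) < 0.05:
            print(f"Scope 2 MB share ~ {p:.2f}, combined ~ {combined_reduction(p):.1f}x")
            break

That would put Scope 2 MB at roughly three quarters of the baseline per-prompt footprint; the split itself is my inference, not something the paper states.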
You're grouping those words wrong. As another commenter pointed out to you, which you ignored, it's median (Gemini Apps) not (median Gemini) Apps. Gemini Apps is a highly specific thing — with a legal definition even iirc — that does not include search, and encompasses a list of models you can actually see and know.
I didn't ignore it; I actually spent some time researching what Google means by "Gemini Apps" (plural) and whether it includes the Search AI overviews, and I can't get a clear answer anywhere.
Of course, Gemini App (singular) means the mobile app. But the term Gemini Apps (plural) seems to be used by Google to refer to any way in which users can access the Gemini models, and they do clearly state that a version of Gemini is used to generate the search overviews.
So it still seems reasonably likely, until they confirm otherwise, that this median includes search overview.
No, because unless they state otherwise we should assume that they consider search overview to be an AI assistant (they definitely believe this) and also that it's one of the Gemini Apps.
Look, there's not enough information to answer this within the paper. I'm not willing to give Google the benefit of the doubt on vague language, and you are. I'm assuming they're a huge, basically evil corporation whose every publication is gone over and reworded by marketing to make them look good, and you're assuming... whatever.
The median does not move when the upper tail shifts; it only moves when values cross the midpoint of the distribution.
The fact that they do not report the mean is concerning. The mean captures the entire distribution and could actually be used to calculate the expected value of energy used.
The median only tells you the point that separates the upper half of the distribution from the lower half; if you don't know anything else about the distribution, you can't use it for any kind of analysis.
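To illustrate with made-up numbers: mix a pile of near-free requests into the same workload and the median collapses, while the mean (and the total energy) tells a very different story:

    import statistics

    # Hypothetical per-request energy figures in arbitrary units -- not real data.
    chat_requests = [2.0] * 100      # full assistant responses
    tiny_requests = [0.01] * 300     # cheap requests served by a much smaller model

    before = chat_requests
    after = chat_requests + tiny_requests  # same chat workload plus many tiny ones

    print(statistics.median(before), statistics.mean(before))  # 2.0   2.0
    print(statistics.median(after), statistics.mean(after))    # 0.01  0.5075
    # The median drops 200x even though total energy went up (200 -> 203 units).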
I can't copy text from that pdf on my phone, but the paragraph above says exactly what you'd expect: they're using a "median" value from a "typical user" across all Gemini models, while being very careful not to list the specific models used to calculate that median, because it almost certainly includes the tiny model used to show AI summaries on google.com, which would massively skew the median value. As someone above said, it's like adding 8 extra meals of a single lettuce leaf and then claiming you reduced the median caloric intake of your meals.
What? The paper clearly says "This section presents the environmental impact metrics for the Gemini Apps AI assistant". You are jumping through a lot of hoops instead of just reading the paper.