How is it that cloud LLMs can be so much cheaper, especially given that local compute, RAM, and storage are often orders of magnitude cheaper than their cloud equivalents?
Is it possible that this is an AI bubble subsidy where we are actually getting it below cost?
Of course, for conventional compute the cloud markup is ludicrous, so maybe this is just cloud economies of scale with a much smaller markup.
1. Economies of scale. Cloud providers run clusters of tens of thousands of GPUs. I think they can run inference much more efficiently than you could in a single cluster built just for your own needs.
2. As you mentioned, they are selling at a loss. OpenAI is hugely unprofitable, and they reportedly lose money on every query.
I think batch processing of many requests is what makes it cheaper. Once a layer of the model is loaded into cache, you can push many prompts through it, so the cost of moving the weights is amortized across the whole batch. Running it locally for a single user, you don't get that benefit.
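To make the amortization concrete, here is a toy NumPy sketch (sizes are made up for illustration; this is not anyone's actual serving code). Every prompt multiplies against the same weight matrix either way, but the batched version only walks the weights through the memory hierarchy once for the whole batch:

    # Toy sketch with made-up sizes: both paths do the same math, but the
    # batched matmul streams the weight matrix through memory once for all
    # 64 prompts instead of once per prompt.
    import time
    import numpy as np

    d = 4096                                          # hypothetical hidden size
    W = np.random.randn(d, d).astype(np.float32)      # one layer's weights
    acts = np.random.randn(64, d).astype(np.float32)  # activations for 64 prompts

    t0 = time.perf_counter()
    for x in acts:                                    # one prompt at a time
        _ = x @ W
    seq = time.perf_counter() - t0

    t0 = time.perf_counter()
    _ = acts @ W                                      # all 64 prompts in one pass
    batched = time.perf_counter() - t0

    print(f"sequential: {seq*1e3:.1f} ms   batched: {batched*1e3:.1f} ms")

Serving frameworks do essentially the same thing at scale (often called continuous batching): the weights stream out of HBM once per decode step and every request in the batch rides along.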
They are specifically referring to usage of APIs where you just pay by the token, not by compute. In this case, you aren’t paying for capacity at all, just usage.
"Sharing between users" doesn't make it cheaper. It makes it more expensive due to the inherent inefficiencies of switching user contexts. (Unless your sales people are doing some underhanded market segmentation trickery, of course.)
No, batched inference can work very well. Depending on the architecture, you can get 100x or more total tokens out of the same hardware if you feed it multiple requests in parallel.
Of course that doesn't map well to an individual chatting with a chatbot. It does map well to something like "hey, laptop, summarize these 10,000 documents."
Yes, and people do that. Some people get thousands of tokens per second that way with affordable setups (e.g. 4x 3090). I was addressing the GP, who said there are no economies of scale in having multiple users.
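Rough numbers to show why figures like that are plausible (everything here is assumed for illustration: a ~70B model quantized to 4 bits, weights split across 4x 3090-class cards):

    # Back-of-envelope only, with assumed numbers: while decoding is limited by
    # how fast the weights stream out of VRAM, the cost of one decode step is
    # roughly fixed, so total tokens/s scales with batch size until compute or
    # KV-cache memory becomes the new bottleneck.
    weight_bytes = 35e9    # assumed: ~70B params at 4-bit quantization
    bandwidth    = 3.7e12  # assumed: 4 cards x ~0.93 TB/s, weights split across them
    step_s = weight_bytes / bandwidth   # time to stream the weights once per step

    for batch in (1, 16, 128):
        print(f"batch {batch:3d}: ~{batch / step_s:,.0f} tok/s (crude upper bound)")

A single user chatting sits at the batch=1 end of that table, which is the whole point about shared use.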
All that, but also because they have those GPUs with crazy amounts of RAM and crazy bandwidth?
So the TPS is that much higher, but in terms of power, I guess those boards draw power in the same ballpark as consumer GPUs?