I wonder how long the myth of AI firms losing money on inference will persist. Feels like the majority of the evidence points to good margins on that side
> I wonder how long the myth of AI firms losing money on inference will persist. Feels like the majority of the evidence points to good margins on that side
If they're not losing money on inference, why do they need to keep raising absurd amounts of money? If inference is profitable and they're still losing lots and lots of money overall, then training must be extremely expensive. That means they're essentially investing in rapidly depreciating capital assets (the models), which is not a good business.
I think Anthropic is an interesting case study here, since most of their volume is API traffic and they don't offer a very generous free tier (unlike OpenAI).