I think Ed maybe buries the lede a little bit with all the ranting. About halfway through the piece he drops this chart[1] which personally left me shocked.
To my knowledge, 10,000% revenue growth over seven years just doesn't happen once you're at that level of volume. Even asking 4o about this projection gets an acknowledgment of the reality:
"OpenAI’s projection to grow from approximately $1 billion in revenue in 2023 to $125 billion by 2029 is extraordinarily ambitious, implying a compound annual growth rate (CAGR) exceeding 90%. Such rapid scaling is unprecedented, even among the fastest-growing tech companies."
Am I missing something? I like OAI and I use ChatGPT every day, but I remain unconvinced of those figures.
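For what it's worth, the quoted math checks out; here's a quick sketch, treating 2023 to 2029 as six compounding periods and taking the $1B and $125B figures as given:

```python
# Implied growth rate of OpenAI's projection: $1B (2023) -> $125B (2029).
start, end, years = 1.0, 125.0, 6   # revenue in $B; six compounding periods

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 124% per year, every year, for six years
```

So the "exceeding 90%" in the quote is, if anything, an understatement.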
The fact that people are generally uncritical of this growth claim is just stunning. You have to believe that this is as big as smartphones while ignoring all of the differences with this cycle of hype.
Nobody intelligent is actually uncritical about these growth claims. What you are seeing is the other side of "the market can remain irrational longer than you can remain liquid". There's no financial benefit to calling bullshit, because timing when the market is suddenly going to correct is impossible. Rational investors are either in the business of boosting the stock to sell it in the short term (a greater-fool scam) or staying as far away from it as possible.
It doesn't have to hit the projections. It just has to keep looking like it could 2 months from now when you sell.
The market has never been determined by the actual value of the underlying tangible thing, but rather the market value of that tangible thing. Although they may seem similar they are not the same. As the market becomes increasingly detached from reality, you'll see more of this.
The vision they are selling is that AI will replace high-value professionals in the economy. It's a vision that entails the collapse of the entire western societal model. If you believe that, investing in them becomes an existential matter, and the actual immediate value doesn't matter. You simply can't afford not to invest.
The numbers are meaningless. They are just meant to evoke "we will control the economy".
ChatGPT was released Nov 2022. Who would have predicted on launch day that they'd do $1B in 2023? Or $4B in 2024? Or $13B in 2025?
This chart actually shows a substantial slowing of the growth they've already achieved over the last 2.5 years. Why are you so convinced it'll slow even faster than that?
Ed's style is very much to rant about this stuff lol. But it's really important to have critical voices right now since a miss with this stuff could have devastating consequences for the industry and the people who work in it.
I think the author might need a reality check. In almost every company, the use of AI is becoming more and more valuable. Even if progress in models halts, the current state is already too valuable to walk away from. I have talked to multiple people at multiple companies, and LLMs are becoming indispensable for so many things. ChatGPT may or may not conquer the world, but I don't see enterprise usage decreasing (and I am not talking about customer-facing LLM usage, which might be very stupid).
The author's tone is over the top, but I think this quote is true:
> Large Language Models and their associated businesses are a $50 billion industry masquerading as a trillion-dollar panacea for a tech industry that’s lost the plot.
There are very real use cases with high value, but it's not an economy-defining technology, and the total value is going to fall far short of projections. On bottom lines, the overall productivity gain from AI for most companies is almost a rounding error compared to other factors.
Part of this is because VCs are hoping for a repeat of the Web 2.0 boom, where marginal costs were zero and billions of people were buying smartphones. If you check YC’s RFS (just an example), they’re all software: https://www.ycombinator.com/rfs
Everyone asks “what if this is like the internet” but what if it’s actually like the smartphone, which took decades of small innovations to make work? If in 1980 you predicted that in 30 years handheld computers would be a trillion dollar industry you’d be right but it still required billions in R&D.
There are a ton of non-software innovations out there, they just require more than a million dollar seed to get working. For example making better batteries, better solar panels, fusion power, innovations in modular housing, etc.
Hopefully you are right, but I fear this is a very naive and premature judgement. At one point the internet was a $50B industry insisting it would be a $1T industry. It even had a bubble, bursting and burying entire companies.
Yet, $1T was nevertheless a profound underestimation.
The same could be said about all the hypes and trends that died: blockchain (ignoring the cryptocurrency part), IoT (not sure what happened there), big data (the foundation of current AI, regardless of what anyone says), an app for everything (we do indeed have more apps now, and everything is junk). All were also billed as the new water/air/electricity/revolution/disruption.
We seem, in aggregate, to have developed a collective amnesia because of how fast these trends move and how much is burned keeping the hype machine going to keep us on edge. We also need to stop calling LLMs different, just like every kid wants to claim Mark Zuckerberg was different or Bill Gates was different, so dropping out like them would make these kids owners of the next infinite riches.
After a long decade of "this will truly revolutionize everything" speeches every so often, we need to keep some skepticism. Additionally, an AI bubble would be more devastating than the previous ones: earlier money was spread across multiple hypes, some of which emerged as quiet victors of current trends, but now everything is consolidated into one thing, all eggs in one basket. If the eggs break, a large population and industry will metaphorically starve and suffer.
Is it? There's certainly $1T of other businesses built with the internet, but the internet business, itself, was rapidly commoditized. The valuable things were the applications built on it, not the network. The argument here is that nobody's found the $1T applications built on AI foundation models yet, but OpenAI is valued as if they have, because their demo chatbot took off out of people's curiosity and people are extrapolating that accident exponentially into the future.
The internet bubble is probably a good analogy. It took almost 20 years and several rounds of failed businesses for the internet to have the impact that was originally promised. The big internet companies of the 90s are not where the money was ultimately made.
Similarly, the current LLM vendors and cloud providers are likely not where the money will ultimately be made. Some startup 10-15 years from now will likely stack a cheaply hosted or distributed LLM with several other technologies, and create a whole new category of use cases we haven't even thought of yet, and that will actually create the new value.
Almost all of the internet buildout happened between 1998 and 2008; it cost about $1T, and by the end of it the internet was adding $1T to the economy annually.
This latest AI hype cycle is also about 10 years old, with about $1T invested, and yet it's still a super-massive financial black hole with no economy-wide trillion-dollar boost anywhere in sight.
The internet broadband, fiber, and cellular buildout changed the world significantly. This LLM buildout is doing no such thing and is unlikely to ever do so.
Approximately no one gave a flying fuck about “AI” at anything close to this scale and level of funding and hype before ChatGPT was released in Nov 2022. My non-tech friends and relatives couldn’t have named a single AI product; now they all use ChatGPT, many of them daily and with paid accounts.
Let’s circle back in 2032 and see how much of this was “hype”.
How much is "the internet" an industry? It's an enabler and a commodity as much as electricity or road networks are. Are you counting everything using the internet as contributing a sizable share to the internet industry's value?
By the time we finished pouring a trillion dollars into the global broadband, fiber, and cellular network buildout between 1998 and 2008, the Internet was already adding a trillion dollars a year to the economy.
We've now got 10 years and about a trillion dollars invested in this latest AI bubble, and it's still a super-massive financial black hole.
Ten years and a trillion dollars can make great things happen. AI ain't that.
There is a difference between valuable and profitable. I think anyone who wants to say there isn’t a bubble needs to solve two problems:
1) Inference is too damn expensive.
2) The models/products aren’t reliable enough.
I also personally think talking to industry folks isn’t a silver bullet. No one knows how to solve #2. No one. We can improve by either waiting for better chips or using bigger models, which has diminishing returns and makes #1 worse.
Maybe OpenAI’s next product should be selling dollars for 99 cents. They just need a billion dollars of SoftBank money, and they can do 100 billion in sales before they need to reraise. And if SoftBank agrees to buy at $1.01 the business can keep going even longer!
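The joke's arithmetic actually holds up; a quick sketch with purely hypothetical numbers:

```python
# The "selling dollars for 99 cents" business model, sketched out.
capital = 1_000_000_000      # $1B of (hypothetical) SoftBank money
cents_lost_per_dollar = 1    # sell $1.00 for $0.99: lose one cent per dollar of "sales"

# Each dollar of revenue only burns one cent of capital,
# so $1B of funding subsidizes this much top-line revenue:
revenue = capital * 100 // cents_lost_per_dollar
print(f"${revenue:,}")  # $100,000,000,000
```

Which is the point: top-line "growth" is trivially easy to manufacture when every sale is at a loss.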
I think AI will be useful to industries/companies where #2 is unimportant: Where the quantity of product is far more important than its quality. Disturbingly, this describes the market for a lot of industries/companies.
It seems like every 6 months they come out with a new model that's as good as the previous generation but with a fraction of the inference cost. Inference cost for a given level of quality has been dropping fast. E.g. GPT-4.1 nano outperforms the old GPT-4 model but costs 300x less.
Right now, asking a single question via the API costs about a fiftieth of a cent on their cheap 4.1-nano model, up to about 2 cents on their o3 model. This is pretty affordable.
On the other end of the spectrum, if you're maxing out the context window on o3 or 4.1, it'll cost you around $0.50 to $2. Pricey, but on 4.1 that's several novels' worth of input text (maybe around 2,000 pages).
If you're looking at it from the perspective of business sustainability (i.e. what the article is about) "we keep lowering our prices" doesn't sound so great. The question is whether GPT-4.1 nano costs OpenAI 300x less than GPT-4 to run or not. If it costs exactly that much less, that still means demand needs to grow by more than 300x just to keep revenue constant. And if it does, then total inference cost correspondingly goes up again.
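That revenue-neutral arithmetic, with illustrative numbers (the $1/query baseline is made up; only the 300x ratio comes from the parent comment):

```python
old_price = 1.00            # hypothetical $ per query at the old cost level
cost_drop = 300             # "300x cheaper" per unit of quality, per the parent comment
new_price = old_price / cost_drop

old_queries = 1_000_000     # illustrative baseline demand
old_revenue = old_queries * old_price

# Demand needed at the new, lower price just to hold revenue flat:
breakeven_queries = old_revenue / new_price
print(f"{breakeven_queries / old_queries:.0f}x more queries needed")  # 300x
```

If prices track costs, any cost drop below 300x demand growth is a straight revenue decline.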
That would be great news for OpenAI if they could somehow prevent people from buying their own computers. Because as inference costs come down for OpenAI, their customers also get access to better and better commodity hardware (and also better models to run on it). And commodity models become more and more capable all the time, even if they’re not the best.
There are different kinds of reliable. There is the reliability of certain failure for instance. These things are unreliable in a way that the other things that you listed are not.
Nature is reliable in that we have rules (physics) and practices (engineering) to base our process upon.
People aren't reliable, for a specific value of reliable.
We expect technology (machines, software, AI, whatever) to be deterministically reliable (we know its failure modes), and significantly more reliable than humans at what it does. That's why we rely on it: we use it to replace humans at what they do, in ways humans can't match: harder, faster, stronger.
> In almost every company the usage of AI is becoming more and more valuable.
It's certainly becoming more common, and there are lots of people who want it to be valuable, and indeed believe it's valuable.
Personally, I find it about as valuable as a really, really good intellisense. Which is certainly valuable, but I feel like that's way off from the type/quality/quantity of value you're suggesting.
I also find the intellisense aspect of it good, though the price is still too high when my local IDE has been able to do a tenth of that for years.
Additionally, LLMs sort of bring back old-school Google mastery: finding the right result quickly saves a huge amount of wading through junk and SEO spam, which translates to productivity. But then we're balanced out again, because that same productivity was lost when SEO spam took off a decade back. I'm indifferent about this gain too, since anything with mass adoption tends to devolve into garbage behavior eventually; slowly the gains will be eaten up again.
This is the comment I wanted to write before scrolling down. Information retrieval in general is really improving computer-related tasks, but I think present or visible is a much better term to describe it than valuable.
That's beside the point, though. How are they going to meet shareholders' expectations of future revenue? When AI becomes as expensive as a human, what happens then?
When AI becomes as expensive as a human, lay off humans to keep financing AI, because it is easy to fire and rehire people but expensive (more precisely, it hurts leadership's ego to acknowledge failure and bad decision-making) to rip out complex integrations and become independent of them.
Edit: I am just sharing how our CTO responds to the massive push of AI into everything. Integrating a non-deterministic system has massive cost, and once the thing is made deterministic, the additional steps add expenses that finally make the entire solution too expensive compared to the benefit. This is not my opinion, just how typical leadership hopes to tackle the expense issue.
It seems like the question is whether you believe in cost-based or value-based pricing. The cost of AI for the same amount of power is going down a lot year-to-year. [1]
If market prices go down with costs, then we see something like solar power where it’s everywhere but suppliers don’t make money, not even in China.
Or maybe customers spend a lot more on more-expensive models? Hard to say.
> people are babbling about the "AI revolution" as the sky rains blood and crevices open in the Earth, dragging houses and cars and domesticated animals into their maws. Things are astronomically fucked outside,
It took me 20+ ranty paragraphs to realise that this guy is not, actually, an AI doomer. Dear tech writers, there are camps and sides here, and they're all very deeply convinced of being right. Please make clear which one you're on before going off the deep end in anger.
Might be a bit naive, but by the time 2029 comes around, and AI companies have started 'monetising free users', won't a lot of people/companies have open-source models tuned and tailored to their needs running locally, no provider required?
If there is anything I can expect to remain consistent over the next 30-40 years, it's that the majority of people have no interest in, or ability for, maintaining technical systems themselves, as evidenced by the often banal and extremely simplistic tasks that family, friends, and my job as an IT support technician require of me. There absolutely will be a move toward models for more specific tasks, but they will be packaged and distributed by channels and entities that need to monetize their services in some way. That is just inevitable. The problem was never access to information; it was the nature of most people. (Just to be clear, I don't think this is technically a bad thing, just something I have noticed.)
I think it'll mostly still be centralized around providers
I don't see good local models happening on mobile devices anytime soon, and the majority of desktop users will use whatever is the best + most convenient. Competitive open source models running on your average laptop? Seems unlikely
Remember when kids used to learn video editing or Photoshop? You can do that in some remarkably capable tools online now, and the kids don't bother with offline tools anymore. This is the same thing.
As other commenters have rightly pointed out - agents being a good product is orthogonal to agents being profitable. If you can start securing big contracts with enterprises by convincing them that the AI revolution is coming, that's enough for profit.
It will only be years down the road when people start realizing that they're spending millions on AI agents that are significantly less capable than human employees. But by that point OpenAI will already be filthy rich.
To put it a different way - AI is currently a gold rush. While Nvidia sells shovels, OpenAI sells automated prospecting machines. Both are good businesses to be in right now.
Absolutely. Although another sentiment on HN is that it doesn't matter, which I find cynical when I see people use it as a pro-LLM argument, as in "who cares if it doesn't work."
Granted, but you and I were both here in days when Terry Davis, may he rest in peace, was also. We have uncommon cause to know how broad a church "sentiment also on HN" may be.
Nvidia made $44B profit last year. They are functionally a monopoly and can charge whatever they want. That’s a good business.
OpenAI lost $5B last year. There are many competitors offering similar products, preventing OpenAI from extracting large profits. It isn’t a good business now. Sam Altman is promising it will be in the future.
> I don't know why I'm the one writing what I'm writing, and I frequently feel weird that I, a part-time blogger and podcaster, am writing the things that I'm writing.
I can't tell whether this man actually believes that he is the only one critiquing AI? I mean.. I can barely walk 2 feet without tripping over anti-AI blogs, posts, news articles, youtube videos or comments.
Ed's main critique is about business sustainability -- it's true that there are many articles about AI on IP issues or ethics but he is unique in actually crunching the numbers on profit.
There's Financial Times, Forbes, tons of reddit posts, youtube videos.. I suppose it's possible that he's the only blogger doing this, but as far as I can see he is not the only one crunching profit numbers on the most "visible" company in the world.
Lots of people are saying "this doesn't work very well", but I think he probably _is_ the main person banging the "this makes no financial sense whatsoever" drum; that bit does seem to get weirdly skimmed over most of the time, even by many AI sceptics.
OpenAI is a bubble. AI industry is a bubble. AI value is real (and probably underestimated). It would take at least a decade for all the relevant industries to incorporate the progress so far.
If Ed feels this strongly, he should short NVDA and laugh all the way to the bank when it pops.
I think such sayings are more and more being proven wrong. "Buy the rumor, sell the news" and "buy the dip and hodl long term" have severely backfired in recent months, and many individuals (some I personally know) lost huge.
Until we somehow return to ZIRP, there is just too much money sunk into AI for the hype not to be kept going as long as possible, to avoid investor backlash or mass layoffs (cutting costs to "recover" because we spent too much on AI and can't get refunds, so lay people off instead). Money seems to chase only hype, and since there isn't enough freely available money for other topics, it all flows to the one thing everyone chases.
Well, people have been saying Tesla is overvalued for years and it’s only just now started to come down. It’s still massively overvalued on PE compared to other car companies.
GameStop is still trading at 82 PE. Insane valuation. Apple and Google are money printers and only trade at 20-30.
OpenAI could very well be to this AI boom what Netscape was to the dotcom bubble. Even post dotcom crash, a lot of lasting value remained—and I believe the same will happen this time too.
The lasting value was a trillion dollars worth of broadband, fiber, and cellular build-out, the literal physical internet that was built between 1998 and 2008 and persists today, allowing for trillions of dollars of new economic activity.
The lasting value of OpenAI and all of the other frontier labs and hardware makers supporting them will be what? What have they collectively built that will outlive their doomed corporations for decades providing trillions of dollars in new economic activity?
Are all those GPU warehouses going to bolster the economy by a trillion or two every year when these AI companies are all gone? Will those ancient LLMs be adding trillions to the economy?
"The costs of inference are coming down: Source? Because it sure seems like they're increasing for OpenAI, and they're effectively the entire userbase of the generative AI industry!"
Seems to me like Ed is making a very elementary mistake here. I don't think anyone has ever claimed the total amount of money spent on inference would decrease. Claims about inference cost are always about cost per quality of output (admittedly there is no consensus on how to measure the latter). If usage goes up faster than cost per unit goes down, then you spend more, but the point is that you're also getting more.
OpenAI has a massive brand advantage because the general public equates their products with AI.
Even if they don't 100% figure out agents they are now big enough that they can acquire those that do.
If the future is mostly about the app layer then they'll be very aggressive in consolidating the same way Facebook did with social media, see for example Windsurf.
> Generative AI has never had the kind of meaningful business returns or utility that actually underpins something meaningful
Even if we assume this is true, it’s worth asking.. did the promised efficiency of the advertising economy ever need to be “real” to completely transform society?
I don't agree with Ed's general style of rhetoric, but every single thing he says is important, and these are active topics of avoidance and hand-waving from language-model advocates, who are obviously also upset by what he says.
The pivot of the article is this, at the end: "There are no other hypergrowth markets left in tech."
That's the key.
Tech is, at the moment, the only growth engine for capitalism, the one sustaining the whole world economy (before, it used to be "just" IT [2015], credit [2008], oil [1973], coal, all of which demonstrated their limits for sustaining continuous growth).
AI is the only growth engine left within tech at this moment, supported by GPU/TPU/parallelization hardware.
Given what's at stake, if/when the AI bubble bursts, if there's no alternative growth engine to jump to, the domino effect will not be pretty.
Could say that capital has been searching for the next great productivity tech for a while now.
AI presents this opportunity, and is the only idea available that could actually work. It's no wonder that capital has flooded in to support it.
Thing is, if it succeeds at the pace and extent that the optimists are predicting then the economic disruption will be massive.. and if it doesn't succeed, and there is no other alternative available for long term productivity improvement, then the economic disruption will also be massive.
I don't have the energy to dig back through this person's old posts, but I'm curious: did they correctly project that we'd be here two years ago? Because if not, why would I care about their insufferable ranting about where things are going from here?
It’s easy to be a critic and declare that every new fad is overhyped, and you’ll mostly be right. But when you’re wrong, damn do you look foolish.
I don't know. Neither do you. Stay long NVDA, somebody'll take the other side of the bet, and eventually we'll all be able to look back and see where we started finding out who'd called it.
You might want to take a break. Siskind is entertaining, yes. So on occasion are NK Jemisin, Adrian Tchaikovsky, and on the rare occasion he can't help it, even Gabriel García Márquez. How reliable a literal or Hitch Hiker's guide to pericontemporaneous events does that make any of them?
[1] https://lh7-rt.googleusercontent.com/docsz/AD_4nXcTvV_KScCMt...