Not OP, but I've been thinking about this and concluded it's not quite so clear-cut. If I were going to go down this path, I think I'd bet on competitors rather than against incumbents.
My thinking: In a financial system collapse (a la The Big Short), the assets under analysis are themselves the things of value. Whereas betting on AI to collapse a technology business is at least one step removed from actual valuation, even assuming:
1. AI Agents do deliver just enough, and stay around long enough, for big corporations to lay off large numbers of employees
2. After doing so, AI quickly becomes prohibitively expensive for the business
3. The combination of the above factors tanks business productivity
In the event of a perfect black swan, the trouble is that it's not actually clear that this combination of factors would result in concrete valuation drops. The business just "doesn't ship as much" or "ships more slowly". This is bad, but it's only really bad if you have competitors that can genuinely capitalise on that stall.
An example immediately on-hand: for non-AI reasons, the latest rumors are that Apple's next round of MacBook Pros will be delayed. This sucks. But it isn't particularly damaging to the company's stock price, because there isn't really a competitor in the market that can capitalise on that delay in a meaningful way.
Similarly, I couldn't really tell you what the most recent non-AI software features shipped by Netflix or Facebook or X actually were. How would I know if they're struggling internally and have stopped shipping features because AI is too expensive and all their devs were laid off?
I guess if you're looking for a severe black swan to bet against AI Agents in general, you'd need to find a company so entrenched, and so completely committed to and dependent on AI, that it could not financially survive a shock like that, AND one in a space where competitors would immediately seize the advantage.
Don't get me wrong though, even if there's no opportunity to actually bet against that situation, it will still suck for literally everyone if it eventuates.
If you want to bet on a competitor, let's talk, because I'm your guy. While everyone else was looking the other way, I stole home: https://github.com/bablr-lang
Shorting only works if other people realise it when you do. The C-suite will run out of makeup before admitting it's a pig, because the payoff is huge for them. I reckon agentic dev can function "just enough" to let them delay reality for a bit while they fire more of their engineering team.
I don't think this one is worth shorting, because there's no specific trigger event to get mindshare moving and validate your position. You'd have to wait for very big public failures before the herd starts to move.
True, but the world doesn't end in 2025. While I'd agree that big financial benefits from agents look unlikely to arrive this year (and the title specifically mentions 2025), I would bet on agents becoming a disruptive technology in the next 5-10 years. My 2c.
Just empirical observations. It takes time for technology to propagate down to general businesses, and for business methods to propagate up to technology developers. The "down to businesses" direction is the slower path, as it requires business leaders to become familiar enough with the technology to get ideas on how to leverage it.
This is not a new observation -- Clarke's note on overestimating the short-term and underestimating the long-term impact of technology is one of my favorite patterns. My 2c.
This is what I try to explain to people who ask "If LLMs are so good, why haven't they replaced workers?" Well, it took a long time to build the railroads. What use is a locomotive without rails?
Claude Code is impressive but it still produces quite a bit of garbage in my experience, and coding agents are likely to be the best agents around for the foreseeable future.
Shorting is rarely worth it without detailed information, because you also have to get the timing right: you pay borrow fees for as long as you hold the position. If you short AI now but the crash doesn't come for two years, chances are good you'll have lost a lot of money in the meantime.
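To put rough numbers on that, here's a toy sketch in Python. The prices and the 15% borrow fee are hypothetical, and it ignores margin calls and mark-to-market losses along the way, both of which make a mistimed short even worse than this suggests:

```python
# Toy short-position P&L, per share. Hypothetical numbers; ignores
# margin calls, mark-to-market swings, and changing borrow rates.

def short_pnl(entry_price: float, exit_price: float,
              borrow_fee_rate: float, years_held: float) -> float:
    """Sale proceeds minus buyback cost minus cumulative borrow fees."""
    borrow_cost = entry_price * borrow_fee_rate * years_held
    return (entry_price - exit_price) - borrow_cost

# Right thesis, late crash: a 40% drop, but only after two years at a
# 15% annual borrow fee (crowded shorts get expensive to hold).
print(short_pnl(100.0, 60.0, 0.15, 2.0))  # 10.0 -> most of the edge is gone
# The same 40% drop within six months:
print(short_pnl(100.0, 60.0, 0.15, 0.5))  # 32.5
```

Same thesis, same crash; the only variable is how long you paid to wait for it.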
I think it's just written by someone who reads a lot of LLM output - lots of lists with bolded prefixes. Maybe there was some AI assistance (or a lot), but I didn't get the impression that it was AI-generated as a whole.
The thing that sucks about it is that maybe his English is bad (not his native language), so he relies on LLM output for his posts. I'm inclined to cut people slack for this. But the rub is that it's indistinguishable from spam/slop generated for marketing/ads/whatever.
Or it's possible that he's one of those people who _really_ adopted LLMs into _all_ of their workflow, I guess, and he thinks the output is good enough as is because it captures his general points?
LLMs have certainly damaged trust in general internet reading now, that's for sure.
I don't know why you do. I found the article interesting and derived value from it. I don't care whether it was an LLM or a human that gave me that value, and I don't see why it should matter.
It matters to me for so many reasons that I can't go over them all here. Maybe we have different priorities, and that's fine.
One reason why LLM generated text bothers me is because there's no conscious, coherent mind behind it. There's no communicative intent because language models are inherently incapable of it. When I read a blog post, I subconsciously create a mental model of the author, deduce what kind of common ground we might have and use this understanding to interpret the text. When I learn that an LLM generated a text I've read, that mental model shatters and I feel like I was lied to. It was just a machine pretending to be a human, and my time and attention could've been used to read something written by a living being.
I read blogs to learn about the thoughts of other humans. If I wanted to know what an LLM thought about the state of vibe coding, I could just ask one at any time.
> AI tools aren't perfect yet. They sometimes make mistakes, and they can't always understand what you are trying to do. But they're getting better all the time, In the future, they will be more powerful and helpful. They'll be able to understand your code even better, and they'll be able to generate even more creative ideas.
I'm at the stage where I'm fine with AI-generated content. Sure, the verbosity sucks, and there is an interesting idea here. But make it clear that you've used AI, and show your prompts.