This inevitabilist framing rests on an often unspoken assumption: that LLMs will decisively outperform human capabilities in myriad domains. If that assumption holds, then the inevitabilist quotes featured in the article are convincing to me. If LLMs turn out to be less worthwhile at scale than many people assume, the inevitabilist interpretation is just another dream of an AI summer.
Burying the core assumption and focusing on its implication is indeed a fantastic way of framing the argument to win some sort of debate.
It doesn't have to; it just has to outperform humans on some tasks. Often really simple, tedious tasks. Take this whole discussion out of the context of AI and instead look at it as just a machine. Machines outperform humans at all sorts of things; that's why we built them. Your dishwasher outperforms a human because it can wash your dishes on demand for pennies, while a human taking that task from you would demand $15/hr. My car vastly outperforms my legs at getting me places. Email outperforms the USPS for correspondence. I don't know if there's a limit to the size of HN comments, but I could hit it if I continued. These LLMs are a new tool that can perform tasks autonomously that couldn't be done before, and that's super cool. For some reason people get swept up in the mystique of it all and expect them to replace a human, body and soul, at every task, which is kinda silly when discussing an advanced washing machine.
I mentioned this in another comment, but I wholeheartedly agree - tools are great.
I've used a sandblasting machine to do in 3 minutes what would've easily taken me two days of hand scraping, with maybe 4 minutes of training. It's absolutely undeniable how efficient the tool is.
Can you show me something concrete that's comparable with AI/LLMs?
On the order of 900-1000x efficiency - like comparing your legs to your car, you know?
It doesn't have to be as performant or as fast.
It can work and iterate alone when set up properly. Any time it spends working is pure bonus.
It is already inevitable.
I agree that I get a lot of value out of LLMs. But I also have to clean up after them a lot of the time. It's a far cry from being able to replace a skilled developer working on something non-trivial.
If <something> then it's inevitable, otherwise it's not? What exactly do you think "inevitable" means? If it depends on something, then by definition it is not inevitable.