Only up to their quality limit though. This is a slightly different concept from a quantity limit (which also exists), but the general (imperfect) idea is that at your "quality level" (i.e. your ability ceiling), the only real knob you can turn is quantity. In practice, quantity seems to be a defining factor for pushing your ability ceiling higher.
When I checked, these were the top comments. Can't do anything these days ;)
- Menu is accessible but done badly, like navigating blind.
- Badly implemented cookie banner (let me opt out or don't use this)
- Why build an inferior multi-document interface (which is an anti-pattern)
- Waste of money - don't devs have better things to do
- Neat but runs like a dog. Give me SSG pages, otherwise make it good
- Nice website but no-one will use it the way they describe
- It's lovely <- followed up by: "I hate you"
- Websites like this have ultimately all been massive failures
- Awesome, but I have no idea what they do or what their product is
- Love it
- blah blah blah
> "If it's going to happen it will" - That is quite the defeatist attitude. Society becoming shittier isn’t inevitable
You're right in general, but I don't think that'll save you/us from OP's statement. These are simple economic incentives at play. If AI-coding is even marginally more economically efficient (i.e. more for less) the "old way" will be swept aside at breathtaking pace (as we're seeing). The "from my cold dead hands" hand-coding crowd may be right, but they're a transitional historical footnote. Coding was always blue-collar white-collar work. No-one outside of coders will weep for what was lost.
> If AI-coding is even marginally more economically efficient (i.e. more for less) the "old way" will be swept aside at breathtaking pace (as we're seeing).
In the 20 years I've been doing this, that hasn't been the case.
Rails was massively more efficient for what 90% of companies were building. But it never had anywhere near a 90% market share.
It doesn’t take 1000 engineers to build CRUD apps, but companies are driven to grow by forces other than pure market efficiency.
There are still plenty of people using simple text editors when full IDEs have offered measurable productivity boosts for decades.
>(as we’re seeing)
I work at a big tech company. Productivity per person hasn’t noticeably increased. The speed that we ship hasn’t noticeably increased. All that’s happening is an economic downturn.
I think you're correct that Rails and IDEs offer significant productivity benefits yet aren't/weren't widely adopted.
But AI seems to be different in that it claims to replace programmers instead of augmenting them. Yes, higher productivity means you don't have to hire as many people, but with AI tools there's specifically the promise that you can get rid of a bunch of your developers, and regardless of whether that's true, clueless execs buy the marketing.
Stupid MBAs at big companies see this as a cost reduction - so regardless of the utility of AI code-generation tools (which may be high!), or of the fact that there are many other ways to get productivity benefits, they'll still try to deploy these systems everywhere.
That's my projection, at least. I'd love to be wrong.
I suspect we'll find that the amount of technical debt and loss of institutional knowledge incurred by misuse of these tools was initially underappreciated.
I don't doubt that the industry will be transformed, but that doesn't mean that these tools are a panacea.
I read about AI assistants allegedly creating tech debt, but my experience is the opposite. Claude Code makes refactoring easy, which helps reduce tech debt. Tech debt usually accumulates because refactoring takes time and is hard to justify to upper management, who only see new features, not code quality. With Claude Code, refactoring is much faster, so it actually gets done.
Are you talking about refactoring code you’re already familiar with? Or a completely unknown codebase that no one else at the company knows anything about and you’ve been tasked with fixing?
I would argue that you should only allow Claude to refactor code that you understand. Once that institutional knowledge is lost you would then have to regain it before you can safely refactor it, even with Claude's help.
I also specifically used the term "misuse" to significantly weaken my claim. I mean only to say that the risks and downsides are often poorly understood, not that there are no good uses.
Agent-in-a-loop gets you remarkably far today already. It's not straightforward to "rip" a capability out even when you have the code, but we're getting closer by the week to being able to go "Project X has capability Y. Use [$approach] and port this into our project". This HAS to put a fat question mark over the viability of any SaaS that makes its code visible.
It simply spoofs itself as Claude Code when calling the API. Anthropic will shut this down the second it benefits them to do so. Like much of the gravy train right now, enjoy it while it lasts.
I can second this cycle. Agentic code AI is an accelerant to this fire that sure looks like it's burning the bottom rungs of the ladder. Game theory suggests anyone already on the ladder needs to chop off as much of the bottom of the ladder as fast as possible. The cycle appears to only be getting.. vicious-er.
This ends in Idiocracy. The graybeards will phase out, the juniors will become staff level, except.. software will just be "more difficult". No-one really understands how it works, how could they? More importantly WHY should they? The Machine does the code. If The Machine gets it wrong it's not my fault.
The TRUE takeaway here is that as of about 12 months ago, spending time investing in becoming a god-mode dev is not the optimal path for the next phase of whatever we're moving into.
I'm afraid we're already in the phase where regular devs have no idea how things work under the hood. So many web devs fail the simple interview question "what happens when a user enters a URL and presses enter?" I would understand not knowing the details of the DNS protocol, but not understanding the basics of what the browser/OS/CPU is doing is just unprofessional.
And LLM assisted coding apparently makes this knowledge even less useful.
Met a dev who couldn't understand the difference between git, the program, and github, the remote git frontend.
I explained it a few times. He just couldn't wrap his head around that there were files on his computer and also on a different computer over the internet.
Now, I will admit distributed VCS can be tricky if you've never seen it before. But I'm not kidding: he legitimately did not understand the division between his local machine and the internet. That was just a concept he had never considered before.
He also didn't know anything about filesystems but that's a different story.
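For anyone stuck explaining the same thing, here's a minimal sketch of the split he couldn't see. This is my own illustration, not anything from the thread; it assumes git is installed and the script runs inside a repository that has a remote named "origin" (e.g. a GitHub URL).

```python
# Illustrative sketch (assumes: git installed, current directory is a git
# repository with a remote named "origin").
import subprocess

def run(args):
    """Run a command and return its stdout as text."""
    return subprocess.run(args, capture_output=True, text=True, check=True).stdout

# git itself: the full history lives in .git on *this* machine's filesystem.
print("Commits stored locally:")
print(run(["git", "log", "--oneline", "-5"]))

# GitHub (or any other host) is just another computer holding a copy of the
# repository; "origin" is merely a named URL pointing at it.
print("Copies on other machines:")
print(run(["git", "remote", "-v"]))
```

The point being: `git log` reads nothing but the local filesystem, while "origin" is just a bookmark for someone else's copy of the same repository.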
This seems like a common theme around very young computer users: Applications and operating systems have, for over a decade, deliberately tried to blur the line between "files on your computer" and "files in the cloud". Photo apps present you a list of photos, and deliberately hide what filesystem those photos are actually on. "They're just your photos. Don't think too much about where they are!" The end result is that the median computer user has no idea that files exist in some physical space and there is a difference between local and remote storage.
My kid struggles with this. She can't intuitively grasp why one app needs the Internet and another app does not. I try to explain it but it all goes over her head. The idea that you need the Internet when your app needs to communicate with something outside of the phone is just foreign to her: In her mind, "the app" just exists, and there's no distinction between stuff on the phone and stuff on the network.
"Low code quality keeps haunting our entire industry. That, and sloppy programmers who don't understand the frameworks they work within. They're like plumbers high on glue." (https://simple.wikiquote.org/wiki/Theo_de_Raadt)
This phase has been going on for decades now. It's a shame, really.
I don't think I agree with calling that question "simple". I could probably speak non-stop for an entire hour before we even leave my local computer: electrical impulses, the protocol between keyboard and PC, BIOS, interrupts, ASCII and Unicode, the OS, caches, types of local storage, CPU, GPU, window management, the TCP stack, encryption... It's hard to come up with a computer-related field that's not somehow involved in answering that one question.
If anything, I always consider it a good question to assert whether someone knows when to stop talking.
I think it's one of those intentionally vague questions that helps in probing the knowledge depth. Interviewees are typically free to describe the process with as much detail as they can.
I think that's only true if you assume that the AI bubble will never burst.
Bitcoin didn't replace cash, Blockchain didn't replace databases and NoSQL didn't make SQL obsolete. And while I have been wrong before, I'm optimistic that AI will only replace programmers the same way copy-pasting from StackOverflow replaced programmers back in the day.
I think this is mostly true, but when it gets cheaper and faster, it will be able to complete much larger tasks unsupervised.
larger != more complex
The widespread adoption of cheap agentic AI will absolutely be an economic revolution. Millions of customer support jobs will be completely eliminated in the next few years, and that's just the beginning.
Soon it'll be easy to give an AI all the same things you give a new employee: an email address, a slack username, a web browser, access to the company intranet, a GitHub account, a virtual machine with mouse and keyboard control, etc. and you'll be able to swap it out one-for-one with pretty much any low-level employee.