> The article sort of goes sideways with this idea, but pointing out that AI coding robs you of a deep understanding of the code it produces is a valid and important criticism of AI coding.
No it isn't. There's literally nothing about the process that forces you to skip understanding. Any such skips are purely due to the lack of will on the developer's side. This lack of will to learn will not change the outcomes for you regardless of whether you're using an LLM. You can spend as much time as you want asking the LLM for in-depth explanations and examples to test your understanding.
So many of the criticisms of coding with LLMs I've seen really do sound like they're coming from people who started with a pre-existing bias, fiddled with it for a short bit (or worse, never actually tried it at all) and assumed their limited experience is the be-all and end-all of the subject. Either that, or it's a typical skill issue.
> There's literally nothing about the process that forces you to skip understanding.
There's nothing about C that "forces" people to write buffer overflows. But, when writing C, the path of least resistance is to produce memory-unsafe code. Your position reminds me of C advocates who say that "good developers possess the expertise and put in the effort to write safe code without safeguards," which is a bad argument because we know memory errors do show up in critical code regardless of what a hypothetical "good C dev" does.
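To make that concrete, here's a minimal sketch (my own toy example, not from any particular codebase) of the asymmetry: the shortest, most obvious way to copy a string in C is the unsafe one, and the safe version is the one that takes extra thought.

    #include <stdio.h>
    #include <string.h>

    /* The path of least resistance: shortest code, compiles cleanly,
       and silently overflows buf whenever name is 16+ chars long. */
    void greet_unsafe(const char *name)
    {
        char buf[16];
        strcpy(buf, name);
        printf("hello, %s\n", buf);
    }

    /* The safe version: you have to remember the buffer size and
       think about truncation, i.e. it costs extra effort. */
    void greet_safe(const char *name)
    {
        char buf[16];
        snprintf(buf, sizeof buf, "%s", name);
        printf("hello, %s\n", buf);
    }

    int main(void)
    {
        /* Long enough to overflow the unsafe version. */
        const char *name = "a name that is much longer than sixteen bytes";
        greet_safe(name);        /* prints a truncated greeting */
        /* greet_unsafe(name);   would be undefined behaviour */
        return 0;
    }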
If the path of least resistance for a given tool involves using that tool dangerously, then it's a dangerous tool. We say chefs should work with sharp knives, but with good knife technique (the claw grip, for instance) safety is the path of least resistance. I have yet to hear of an LLM workflow where skimming the generated code is made harder than comprehensively auditing it, and I'm not sure that such a workflow would feel good or be productive.
Your point of view assumes the best of people, which is naive. It may not force you to skip understanding, but it makes doing so much easier than ever before.
People tend to take the path of least resistance. Maybe not everyone, maybe not right away, but if you create opportunities to write poor code, people will take them. More than ever, it becomes important to have strong CI, review, and testing practices.
Edit: okay, maybe I am feeling a little pessimistic this morning :)
People will complain about letting the LLM code because you won't understand every nuance. Then they will turn around and pip install a dependency without even glancing at the underlying code.
> No it isn't. There's literally nothing about the process that forces you to skip understanding. Any such skips are purely due to the lack of will on the developer's side
This is the whole point. The marginal dev will take the path of least resistance, which is to skip the understanding and churn out a bunch of code. That is why it's a problem.
You are effectively saying "just be a good dev, there's literally nothing about AI which is stopping you from being a good dev" which is completely correct and also missing the point.
The marginal developer is not going to put in the effort to wield AI in a skillful way. They're going to slop their way through. It is a concern for widespread AI coding, even if it's not a concern for you or your skill peers in particular.
To add to the above: I see a parallel to the "if you are a good and diligent developer there is nothing to stop you from writing secure C code" argument. Which is to say, sure, if you put in the extra effort to avoid all the unsafe bits that lead to use-after-free bugs or race conditions, it's possible to write safe C (or even perfect assembly), but in practice we have found that using memory-safe languages leads to a huge reduction in safety bugs in production. I think we will similarly find that not using AI leads to a huge reduction in bugs in production, once we have enough data to compare against human-generated systems. If that's a pre-existing bias, then so be it.
> The marginal developer is not going to put in the effort to wield AI in a skillful way. They're going to slop their way through. It is a concern for widespread AI coding, even if it's not a concern for you or your skill peers in particular.
My mental model is that coding with LLMs amplifies both what you know and what you don't.
When you know something, you can direct it productively much faster to a desirable outcome than you could on your own.
When you don't know something, the time you would normally have spent researching to build enough understanding to start working on it gets replaced with evaluating the random stuff the LLM comes up with, which oftentimes works, but not in the way it ought to. And since you can get to some result quickly, the trade-off of doing the research feels somehow less worth it.
If you don't have any idea how to accomplish the task, you probably need to cultivate the habit of still doing the research first. Wielding these tools skillfully is now the task of our industry, so we ought to be developing that skill and cultivating it in our team members.
I don't think that is a problem with AI; it is a problem with the idea that pure vibe-coding will replace knowledgeable engineers. While there is a loud contingent that hypes up this idea, it will not survive contact with reality.
Purely vibe-coded projects will soon break in inexplicable ways as they grow beyond trivial levels. Once that happens, their devs will either need to adapt and learn coding for real or be PIP'd. I can't imagine any such devs lasting long in the current layoff-happy environment. So it seems like a self-correcting problem, no?
(Maybe AGI, whatever that is, will change things, but I'm not holding my breath.)
The real problem we should be discussing is, how do we convince students and apprentices to abstain from AI until they learn the ropes for real.
> The real problem we should be discussing is, how do we convince students and apprentices to abstain from AI until they learn the ropes for real.
That's just it. You can only use AI usefully for coding* once you've spent years beating your head against code "the hard way". I'm not sure what that looks like for the next cohort, since they have AI on day 1.
> The real problem we should be discussing is, how do we convince students and apprentices to abstain from AI until they learn the ropes for real.
Learning the ropes looks different now. You used to learn by doing; now you need to learn by directing. In order to direct well, you have to first be knowledgeable. So, if you're starting work in an unfamiliar technology, a good starting point is to read whatever O'Reilly book gives a good overview, so that you understand the landscape of what's possible with the tool and can spot when the LLM is doing (now) obvious bullshit.
You can't just Yolo it for shit you don't know and get good results, but if you build a foundation first through reading, you will do a lot better.
Totally agreed, learning the ropes is very different now, and a strong foundation is definitely needed. But I also think where that foundation lies has changed.
My current project is in a technical domain I had very little prior background in, but I've been getting actual, visible results since day one because of AI. The amazing thing is that for any task I give it, the AI provides me a very useful overview of the thing it produces, and I have conversations with it if I have further questions. So I'm building domain knowledge incrementally even as I'm making progress on the project!
But I also know that this is only possible because of the pre-existing foundation of my experience as a software engineer. This lets me understand the language the AI uses to explain things, and I can dive deeper if I have questions. It also lets me understand what the code is doing, which lets me catch subtle issues before they compound.
I suppose it's the same with reading books, but books being static tend to give a much broader overview upfront, whereas interacting with LLMs results in a much more focused learning path.
So a foundation is essential, but it can now be much more general -- such as generic coding ability -- though that only comes with extensive hands-on experience. There is at least one preliminary study showing that students who rely on AI do not develop the critical problem-solving, coding, and debugging skills necessary to be good programmers:
On vibe coding being self-correcting, I would point to the growing number of companies mandating usage of AI and the quote "the market can stay irrational longer than you can stay solvent". Companies routinely burn millions of dollars on irrational endeavours for years. AI has been promised as an insane productivity booster.
I wouldn't expect things to calm down for a while, even if real-life results are worse. You can make excuses for underperformance of these things for a very long time, especially if the CEO or other executives are invested.
> The real problem we should be discussing is, how do we convince students and apprentices to abstain from AI until they learn the ropes for real
I hate to say it but that's never going to happen :/
I'm a bit cynical at this point, but I'm starting to think these AI mandates are simply another aspect of the war of the capital class on the labor class, just like RTO. I don't think the execs truly believe that AI will replace their employees, but it sure is a useful negotiation lever. As in, not just an excuse to do layoffs but also a mechanism to pressure remaining employees: "Before you ask for perks or raises or promotions, why are you not doing more with less since you have AI? You know that soon we could replace you with AI for much cheaper?"
At the same time, I'll also admit that AI resistance is real; we see it in the comments here for various reasons -- job displacement fears, valid complaints about AI reliability, ethical opposition, etc. So there could be a valid need for strong incentives to adopt it.
Unfortunately, AI is also deceptively hard to use effectively (a common refrain of mine.) Ideally AI mandates would come with some structured training tailored for each role, but the fact that this is not happening makes me wonder about either the execs' competency or their motives.
We most definitely should, especially if you're working in a team or organization bigger than a handful of people, because you will almost certainly need to change or interact with that code very soon in the project's lifetime. When that happens, you want to make sure the code aligns with your own mental model of how things work.
The industry has institutionalized this by making code reviews a standard best practice. People think of code reviews mainly as a mechanism to reduce bugs, but it turns out the biggest benefits (borne out by studies) are actually better context-sharing within the team, mentoring junior engineers, and onboarding new teammates. It ensures that everyone has the same mental model of the system despite working on different parts of it (cf. the story of the blind men and the elephant). This results in better ownership and fewer defects per line of code.
Note, this doesn't mean everybody reviews each and every PR. But any non-trivial PR should be reviewed by teammates with appropriate context.
AI is not my coworker, with different tasks and responsibilities.
The comparison is only reasonable if most of your job is spent trying to understand their code and making sure it does what you wanted, and with them standing next to you, ready to answer questions, explain anything you don't understand, and pull in any external, relevant parts of the codebase.
Of course not; that's a bit disingenuous. I would hope my colleagues write code that is comprehensible so it's maintainable. I think that if the code is so complex and inscrutable that only the author can understand it, then it's not good code. AI doesn't create or solve this problem.
I do think when AI writes comprehensible code you can spend as much time as necessary asking questions to better understand it. You can ask about tradeoffs and alternatives without offending anybody and actually get to a better place in your own understanding than would be possible alone.
>In that context, the answer in this case is to simply start talking about your project and showing it to people and asking for feedback (as you have done), and be conscious that what you're looking for is signals of user interest -- little sparks that you can convert into tiny flames so that you can start a fire.
So all of this text just to tell him to do what he's already been doing?
>The hypothetical is irrelevant here; what is germane is the expectation of privacy held by the individual participants, and the terms which bind people who use that service.
How can you have an expectation of privacy in a public forum? Where did this bizarre disorder originate, where people knowingly put their writing out there for literally anyone to read, then turn around and start talking about "expectations of privacy" when they realize what it entails?
Well unfortunately it originated in the human condition, my friend.
I take it back about "expectation of privacy". Perhaps that is an outmoded concept.
Humans used to sort of have a default expectation of privacy. Being that gossip, slander and libel were sins and crimes, we could often safely gather in a room and isolate ourselves in a select group, and share our thoughts openly.
Most humans could go into a living room with their family, a pub or bar, a classroom, or a treehouse, and say/do things that were shared only by the local group of gathered humans. You could go into a public park and speak to a fire hydrant. It was not usual, or even possible, 100 years ago for the news media to go around with recorders and cameras and record/preserve/transmit/broadcast everything everyone said in every place they said it.
Expectations of privacy were just sort of... humankind's default setting. And so betrayals were sins and crimes. And we sit alone at our keyboard looking at a screen. It feels private, all right. Where are we really? Where are our words being carried? We can't know anymore.
Unfortunately we've built online and virtual worlds around paradigms that imply privacy or confidentiality, but don't actually afford it. You can go into a "chat room" or a "forum" or change your "privacy settings" but they mean nothing. Nothing at all. Because everything we're sending across the net can be perfectly recorded, preserved, retransmitted, and it's no longer gossip, it's just business.
> Where did this bizarre disorder originate
I don't believe that any other living organism has had to deal with the complete and total collapse of "privacy" like humans in the 21st century. Surely, termites in Australia don't know, and couldn't care, about what's going on with honeybees in California.
And here we have people calling it a bizarre disorder. Yes, it's mistaken and misguided, but who can call it unreasonable?
You produced a passive-aggressive taunt instead of addressing the argument.
For clarity: nobody was asking about your business decisions, and nobody is intimidated by your story. What your personal opinions about "attitude" are is irrelevant to what's being discussed (LLMs allowing optimal time use in certain cases). Also, unless your boss made the firing decision, you weren't forced to do anything.
You’re still not getting it: not having a boss means I have a very different view of business decisions. Most people have an overly narrow view of tasks, IMO, and think speed, especially for minor issues, is vastly more important than it is.
> LLMs allowing optimal time use in certain cases
I never said it was slower; I’m asking what the tradeoff is. I’ve had this same basic conversation with multiple people, and after that failed, the only real option is to remove them. Ex: if you don’t quite understand why what you wrote seemingly fixes a bug, don’t commit it yet; "seems to work" isn’t a solution.
Could be I’m not explaining very well, but ehh fuck em.
> Leaning on an LLM to ease through those tough moments is 100% short circuiting the learning process.
Sounds like "back in my day" type of complaining. Do you have any evidence of this "100% short circuiting" or is it just "AI bad" bandwagoning?
> But you're definitely not learning how to write.
How would you know? You've never tested him. You're making a far-reaching assumption about someone's learning based on using an aid. It's the equivalent of saying "you're definitely not learning how to ride a bicycle if you use training wheels".
> Don’t let a computer write for you! I say this not for reasons of intellectual honesty, or for the spirit of fairness. I say this because I believe that your original thoughts are far more interesting, meaningful, and valuable than whatever a large language model can transform them into.
Having spent about two decades reading other humans' "original thoughts", I have nothing else to say here other than: doubt.
> Three months later in April when this tagged data is used to train the next iteration, the AI can successfully learn that today's date is actually January 29th.
Such an ingenious attack, surely none of these companies ever considered it.
> Aaron clearly warns users that Nepenthes is aggressive malware. It's not to be deployed by site owners uncomfortable with trapping AI crawlers and sending them down an "infinite maze" of static files with no exit links, where they "get stuck" and "thrash around" for months, he tells users.
Because a website with lots of links is executable code. And the scrapers totally don't have any checks in them to see if they spent too much time on a single domain. And no data verification ever occurs.
Hell, why not go all the way? Just put a big warning telling everyone: "Warning, this is a cyber-nuclear weapon! Do not deploy unless you're a super rad bad dude who totally traps the evil AI robot and wins the day!"
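For anyone curious, the kind of per-domain check I'm talking about is a few lines of code. Here's a hypothetical sketch (the struct, limits, and names are all invented for illustration, not taken from any real crawler):

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical per-domain budget; a real crawler would track
       something like this per host in its frontier/scheduler. */
    struct domain_budget {
        const char *host;
        long        pages_fetched;
        time_t      started;
    };

    enum { MAX_PAGES = 100000, MAX_SECONDS = 6 * 60 * 60 };

    /* Return 1 while the crawler should keep pulling URLs from this host. */
    static int within_budget(const struct domain_budget *b)
    {
        if (b->pages_fetched >= MAX_PAGES)
            return 0;
        if (difftime(time(NULL), b->started) >= MAX_SECONDS)
            return 0;
        return 1;
    }

    int main(void)
    {
        struct domain_budget b = { "tarpit.example", 0, time(NULL) };

        while (within_budget(&b)) {
            /* fetch_next_page(b.host) would go here; we just count. */
            b.pages_fetched++;
        }
        printf("gave up on %s after %ld pages\n", b.host, b.pages_fetched);
        return 0;
    }

In a real crawler this would be keyed per host and combined with data checks downstream, but the principle is that simple.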