> I would love to see an anti-AI take that doesn't hinge on the idea that technology forces people to be lazy/careless/thoughtless.
The article sort of goes sideways with this idea but pointing out that AI coding robs you of a deep understanding of the code it produces is a valid and important criticism of AI coding.
A software engineer's primary job isn't producing code, but producing a functional software system. Most important to that is the extremely hard-to-convey "mental model" of how the code works and expertise in the domain it works in. Code is a derived asset of this mental model. And you will never know code as well as a reader as you would have as its author, for anything larger than a very small project.
There are other consequences of not building this mental model of a piece of software. Reasoning at the level of syntax is proving to have limits that LLM-based coding agents are having trouble scaling beyond.
> And you will never know code as well as a reader as you would have as its author, for anything larger than a very small project.
This feels very true - but also consider how much code exists for which many of the current maintainers were not involved in the original writing.
There are many anecdotal rules out there about how much time is spent reading code vs writing. If you consider the industry as a whole, it seems to me that the introduction of generative code-writing tools is actually not moving the needle as far as people are claiming.
We _already_ live in a world where most of us spend much of our time reading and trying to comprehend code written by others from the past.
What's the difference between a messy codebase created by a genAI, and a messy codebase where all the original authors of the code have moved on and aren't available to ask questions?
> What's the difference between a messy codebase created by a genAI, and a messy codebase where all the original authors of the code have moved on and aren't available to ask questions?
The difference is the hope of getting out of that situation. If you've inherited a messy and incoherent code base, you recognize that as a problem and work on fixing it. You can build an understanding of the code through first reading and then probably rewriting some of it. This over time improves your ability to reason about that code.
If you're constantly putting yourself back into that situation through relegating the reasoning about code to a coding agent, then you won't develop a mental model. You're constantly back at Day 1 of having to "own" someone else's code.
The key point is "relegating the reasoning". The real way to think about interfacing with LLMs is "abstraction engineering". You still should fully understand the reasoning behind the code. If you say "make a form that captures X, Y, Z and passes it to this API" you relegate how it accomplishes that goal and everything related to it. Then you look at the code and realize it doesn't handle validation (check the reasoning), so you have it add validation and toasts. But you are now working on a narrower level of abstraction because the bigger goal of "make a user form" has been completed.
Where this gets exhausting is when you assume certain things that you know are necessary but don't want to verify - maybe it lets you submit an email form with no email, or validates the password as an email field for some reason, etc. But as LLMs improve their assumptions or you manage context correctly, the scale tips towards this being a useful engineering tool, especially when what you are doing is a well-trodden path.
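For what it's worth, the kind of check I end up reviewing, or asking the agent to add, at that narrower level looks roughly like this. The field names and rules are made up for illustration, not taken from any real project:

    import re
    from dataclasses import dataclass

    # Made-up signup form; the point is that the validation rules are explicit and reviewable.
    EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

    @dataclass
    class SignupForm:
        email: str
        password: str
        display_name: str

    def validate(form: SignupForm) -> list[str]:
        """Return human-readable validation errors; an empty list means the form is valid."""
        errors = []
        if not EMAIL_RE.match(form.email):
            errors.append("email: not a valid address")
        if len(form.password) < 8:
            errors.append("password: must be at least 8 characters")
        if not form.display_name.strip():
            errors.append("display_name: required")
        return errors

    # The "submits with no email" case the agent happily shipped:
    print(validate(SignupForm(email="", password="hunter2!!", display_name="Ada")))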
I find this to be too rosy a story about using agentic coding to add to a codebase. In my experience, miss a small detail about the code and the agent can go out of control, creating a whole new series of errors that you wouldn't have had to fix. And even if you don't miss a detail, the agent eventually forgets because of the limited context window.
This is why I’ve constrained my use of AI agents to mostly “read-only and explain” use cases, but I have very strict conditions for letting it write. In any case, whatever productivity gains you supposedly “get” for its write scenarios, you should be subtracting your expenses to fix its output later and/or payments made for a larger context window or better reasoning. It’s usually not worth the trouble to me when I have plenty of experience and knowledge to draw from and can write the code as it should be myself.
So there’s another force at work here that to me answers the question in a different way. Agents also massively decrease the difficulty of coming into someone else’s messy code base and being productive.
Want to make a quick change or fix? The agent will likely figure out a way to do it in minutes rather than the hours it would take me to do so.
Want to get a good understanding of the architecture and code layout? Working with an agent for search and summary cuts my time down by an order of magnitude.
So while I agree there's a lot more "what the heck is this ugly pile of if else statements doing?" and "why are there three modules handling transforms?", there is a corresponding drop in the cost of adding features and paying down tech debt. Finding the right balance is a bit different in the agentic coding world; it's a different mindset and set of practices to develop.
In my experience this approach is kicking the can down the road. Tech debt isn't paid down, it's being added to, and at some point in the future it will need to be collected.
When the agent can't kick the can any more who is going to be held responsible? If it is going to be me then I'd prefer to have spent the hours understanding the code.
This is actually a pretty huge question about AI in general
When AI is running autonomously, where is the accountability when it goes off the rails?
I'm against AI for a number of reasons, but this is one of the biggest. A computer cannot be held accountable, therefore a computer must never make executive decisions.
The accountability would lie with whoever promoted it. This isn't so much about accountability as it is about who is going to be responsible for doing the actual work when AI is just making a bigger mess.
The accountability will be with the engineer that owns that code. The senior or manager that was responsible for allowing it to be created by AI will have made sure they are well removed.
While an engineer is "it", they just have to cross their fingers and hope no job-ending skeletons are resurrected until they can tag some other poor sod.
> In the current times you’re either an agent manager or you’re in for a surprise.
This opinion seems to be popular, if only in this forum and not in general.
What I do not understand is this: in order to use LLMs to generate code, the engineer has to understand the problem well enough to formulate prompt(s) that produce usable output (code). Assuming the engineer has this level of understanding, along with knowledge of the target programming language and libraries used, how is using LLM code generation anything more than a typing saver?
The point is an engineering manager is using software engineers as typing savers, too. LLMs are, for now, still on an exponential curve of capability on some measures (e.g. task duration with 50% completion chance is doubling every ~7 months) and you absolutely must understand the paradigm shift that will be forced upon you in a few years or you'll have a bad time. Understanding non-critical code paths at all times will simply be pointless; you'll want to make sure test coverage is good and actually test the requirements, etc.
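Back of the envelope, that claimed doubling rate compounds quickly. This is purely illustrative arithmetic on the stated figure, not a forecast:

    # If some capability measure really doubles every ~7 months, the implied
    # multiplier over a horizon of `months` is 2 ** (months / 7).
    for years in (1, 2, 3):
        months = 12 * years
        print(f"{years} year(s): ~{2 ** (months / 7):.1f}x")
    # -> roughly 3.3x, 10.8x, 35.3x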
> What's the difference between a messy codebase created by a genAI, and a messy codebase where all the original authors of the code have moved on and aren't available to ask questions?
Messy codebases made by humans are known to be a bad thing that causes big problems for software that needs to be maintained and changed. Much effort goes into preventing them and cleaning them up.
If you want to work with AI code systems successfully, then you had better apply these exact same efforts: documentation, composition, validation, evaluation, review and so on.
You don't have much coding experience, do you? Everybody has a unique coding style that is pretty consistent across a codebase, within a given language let's say. Even juniors, unless they all took it from either Stack Overflow or LLMs (so, the same source).
Regardless of how horrible somebody else's code is, there is some underlying method or logic reflecting how a given person forms a mental model of the problem and breaks it down into little manageable pieces. You can learn that style, over time even ignoring it and seeing their code the same way you see yours. LLM code has none of that, and if it does, it's by pure chance that won't repeat.
> What's the difference between a messy codebase created by a genAI, and a messy codebase where all the original authors of the code have moved on and aren't available to ask questions?
In my experience, the type of messes created by humans and the type of messes created by genAI are immensely different, and at times require different skill sets to dissect.
>We _already_ live in a world where most of us spend much of our time reading and trying to comprehend code written by others from the past.
In 1969, as a newly hired graduate working for the largest construction company in the country, one of my first assignments was to read through a badly formatted COBOL source code listing on paper, line by line, with two others. Each of us had a printout of a different version of the software, trying to discover where exactly the three versions were giving different outputs. Plus ça change, plus c'est la même chose.
> What's the difference between a messy codebase created by a genAI, and a messy codebase where all the original authors of the code have moved on and aren't available to ask questions?
Very unskilled programmers can use generative AI to create complex code that is hard to understand.
Unskilled programmers on their own write simpler code with more obvious, easy-to-solve mistakes.
On the other hand, all the companies I have worked for tried to avoid having unmaintained code (with different levels of success). AI tech debt seems to be added on purpose, pushed by upper management.
> We _already_ live in a world where most of us spend much of our time reading and trying to comprehend code written by others from the past.
We also live in a world where people argue endlessly about how we don't need to write documentation or how it's possible to write self-documenting code. This is where we spend so much of our time, and yet so few actually invest in the efforts to decrease that time.
I bring this up because it's a solution to what you're pointing out as a problem and yet the status quo is to write even messier and harder to understand code (even before AI code). So I'm just saying, humans are really good at shooting themselves in the foot and blaming it on someone else or acting like the bullet came out of nowhere.
> What's the difference between
More so, I must be misreading, because it sounds like you're asking what's the difference between "messy" and "messier"?
If it's the same level of messiness, then sure, it's equal. But in a real-world setting there's a continuous transition of people. One doesn't work on code in isolation, quit, and then have a new person work on that code also in isolation. So maybe it's not the original authors, but rather the original authors are a Ship of Theseus. Your premise isn't entirely accurate, and I think the difference matters.
> The article sort of goes sideways with this idea but pointing out that AI coding robs you of a deep understanding of the code it produces is a valid and important criticism of AI coding.
In any of my teams with moderate to significant code bases, we've always had to lean very hard into code comments and documentation, because a developer will forget in a few months the fine details of what they've previously built. And further, any org with turnover needs to have someone new come in and be able to understand what's there.
I don't think I've met a developer that keeps all of the architecture and design deeply in their mind at all times. We all often enough need to go walk back through and rediscover what we have.
Which is to say... if the LLM generator was instead a colleague or neighboring team, you'd still need to keep up with them. If you can adapt those habits to the generative code then it doesn't seem to be a big leap.
> The article sort of goes sideways with this idea but pointing out that AI coding robs you of a deep understanding of the code it produces is a valid and important criticism of AI coding.
Why? Code has always been the artifact. Thinking about and understanding the domain clearly and solving problems is where the intrinsic value is at (but I'd suspect that in the future this, too, will go away).
Code is the final artifact after everything is shipped. But while development is active, it is more than that (at least for now), as you need to know implementation details even if you are really proficient in the domain knowledge.
Although I do agree that there is a possibility that we'll build a relatively reliable abstraction using LLMs at some point, so this issue will go away. There will probably be some restrictions, but I think it is possible.
Code isn't an "artifact", it's the actual product that you are building and delivering. You can use flowery language and pontificate about the importance of the problem domain if you like, but at the end of the day we are producing a low-level sequence of instructions that will be executed by a real-world device. There has always been, and likely will always be, value in understanding exactly what you are asking the computer to do.
I'm familiar with "artifact" being used to describe the inconsequential and easy to reproduce output of some deterministic process (e.g. build artifact). Even given the terminology you provide here it doesn't change the content of my point above.
When I see someone dismissing the code as a small irrelevant part of the task of writing software, it's like hearing that the low-level design and physical construction of a bridge is an irrelevant side-effect of my desire to cross a body of water. Like, maybe that's true in a philosophical sense, but at the end of the day we are building a real-world bridge that needs to conform to real-world constraints, and every little detail is going to be important. I wouldn't want to cross a bridge built by someone who thinks otherwise.
In most domains, code is not the actual product. Data is. Code is how you record, modify and delete data. But it is ultimately data that has meaning and value.
This is why we have the idiom: “Don’t tell me what the code says—show me the data, and I’ll tell you what the code does.”
Reminds me of criticisms of Python decades ago: that you wouldn't understand what the "real code" was doing since you were using a scripting language. But then over the years it showed tremendous value, and many unicorns were built by focusing on higher-level details and not lower-level code.
Comparing LLMs to programming languages is a false equivalence. I don't have to write assembly because LLVM will do that for me correctly in 100% of cases, while AI might or might not (especially the more I move away from template CRUD apps).
That is a myth. CPU time is time spent waiting around by your users, as the CPU takes seconds to do something that could be instant; if you have millions of users and that happens every day, it quickly adds up to many years' worth of time.
It might be true if you just look at development cost, but if you look at value as a whole it isn't. And even on development cost alone it's often not true, since time spent by the developer waiting for tests to run and things to start also slows things down; taking a bit of time to reduce CPU time there is well worth it just to get things done faster.
Yeah, it's time spent by the users. Maybe it's an inefficiency of the market because the software company doesn't feel the negative effect enough, maybe it really is cheaper in aggregate than doing 3 different native apps in C++. But if CPU time is so valuable, why aren't we arguing for hand written C or even assembly code instead of the layers upon layers of abstraction in even native modern software?
> But if CPU time is so valuable, why aren't we arguing for hand written C or even assembly code instead of the layers upon layers of abstraction
Maybe we should. All it took was Figma taking it seriously and working at a lower level to make every other competitor feel awful and clunky next to it, and then it went on to dominate the market.
> But if CPU time is so valuable, why aren't we arguing for hand written C or even assembly code instead of the layers upon layers of abstraction in even native modern software?
Many of us do frequently argue for something similar. Take a look at Casey Muratori's performance-aware programming series if you care about the arguments.
> But if CPU time is so valuable, why aren't we arguing for hand written C or even assembly code instead of the layers upon layers of abstraction in even native modern software?
That is an extreme case though; I didn't mean that all optimizations are always worth it, but if we look at the marginal value gained from optimizations today, the payback is usually massive.
It isn't done enough since managers tend to undervalue user and developer time. But users don't undervalue user time: if your program wastes their time, many users will stop using it. Users are pretty rational about that aspect and prefer faster products or sites unless they are very lacking. If a website is slow a few times in a row I start looking for alternatives, and data says most users do that.
I even stopped my JetBrains subscription since the editor got so much slower in an update, so I just use the one I can keep forever as I don't want their patched editor. If it didn't get slower I'd gladly keep it as I liked some of the new features, but it being slower was enough to make me go back.
Also, while managers can obviously agree that making developers spend less time waiting is a good thing, it is very rare for managers to tell you to optimize compilation times or such, and pretty simple optimizations there can often make that part of the work massively faster. Like, if you profile your C++ compiler and look at which files it spends time compiling, then look at those files to figure out why it's so slow there, you can find these weird things; fixing them speeds it up 10x, so what took 30 seconds now takes 3 seconds. That is obviously very helpful, and if you are used to that sort of thing you could do it in a couple of hours.
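A crude way to find the slow translation units, if your build doesn't already report per-file timings. The compiler, flags and src/ layout below are placeholders, not a recipe for any particular build:

    # Rough per-file compile timing: compile each translation unit on its own
    # and sort by wall-clock time.
    import glob
    import subprocess
    import time

    results = []
    for src in glob.glob("src/**/*.cpp", recursive=True):
        start = time.perf_counter()
        subprocess.run(["g++", "-std=c++17", "-c", src, "-o", "/dev/null"], check=False)
        results.append((time.perf_counter() - start, src))

    # Print the ten slowest files; those are the ones worth opening up.
    for seconds, src in sorted(results, reverse=True)[:10]:
        print(f"{seconds:6.2f}s  {src}")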
No, I wouldn't. That would require me to be proficient in this, and I am not, so I am pretty sure I would not get to write better assembly optimisations unless I actually became better at that.
The difference is that there is no point (that I know of or would encounter) at which a compiler would not actually be able to do the job and I would need to write manual assembly to fix some parts that the compiler could not compile. Yes, a proficient programmer could probably do that to optimise the code, but the code would run and do the job regardless. That is not the case for LLMs; there is a non-zero chance you get to the point of LLM agents getting stuck, where it makes more sense to get your hands dirty than to keep iterating with agents.
That's not the same thing. LLMs don't just obscure low-level technical implementation details like Python does, they also obscure your business logic and many of its edge cases.
Letting a Python interpreter manage your memory is one thing because it's usually irrelevant, but you can't say the same thing about business logic. Encoding those precise rules and considering all of the gnarly real-world edge cases is what defines your software.
There are no "higher level details" in software development, those are in the domain of different jobs like project managers or analysts. Once AI can reliably translate fuzzy natural language into precise and accurate code, software development will simply die as a profession. Our jobs won't morph into something different - this is our job.
>There are no "higher level details" in software development, those are in the domain of different jobs like project managers or analysts. Once AI can reliably translate fuzzy natural language into precise and accurate code, software development will simply die as a profession. Our jobs won't morph into something different - this is our job.
I'm the non-software type of Engineer. I've always kind of viewed code as a way to bridge mathematics and control logic.
When I was at university I was required to take a first year course called "Introduction to Programming and Algorithms". It essentially taught us how to think about problem solving from a computer programming perspective. One example I still remember from the course was learning how you can use a computer to solve something like Newton's Method.
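For the curious, the whole thing fits in a few lines. A tiny sketch (finding the positive root of x^2 - 2, purely illustrative):

    def newtons_method(f, f_prime, x0, tol=1e-10, max_iter=100):
        """Follow the tangent line toward a root of f, starting from x0."""
        x = x0
        for _ in range(max_iter):
            step = f(x) / f_prime(x)
            x -= step
            if abs(step) < tol:
                return x
        raise RuntimeError("did not converge")

    # sqrt(2) as the positive root of x^2 - 2
    print(newtons_method(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0))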
I don't really hear a lot of software people talk about Algorithms, but for me that is where the real power of programming lives. I can see some idealized future where you write programs just by mixing and matching algorithms, and almost every problem becomes essentially a state machine. To move from State A to State B I apply these transformations which map to these well-known algorithms. I could see an AI being capable of that sort of pattern matching.
The hard thing is to define what State A and State B mean. Also to prepare for States C and D, so that it doesn't cost more to add them to the mix. And to find that State E everyone is failing to mention, …
> "Once AI can reliably translate fuzzy natural language into precise and accurate code, software development will simply die as a profession."
One-shotting anything like this is a non-starter for any remotely complex task. The reason is that fuzzy language is ambiguous and poorly defined. So even in this scenario you enter into a domain where it's going to require iterative cycling and refinement. And I'm not even considering the endless meta-factors that further complicate this, like performance considerations depending on how you plan to deploy.
And even if language were perfectly well defined, you'd end up with 'prompts' that would essentially be source codes in their own right. I have a friend who is rather smart, but not a tech type - and he's currently working on developing a very simple project using LLMs, but it's still a "real" project in that there are certain edge cases you need to consider, various cross-functionality in the UI that needs to be carried out, interactions with some underlying systems, and so on.
His 'prompt' is gradually turning into just a natural language program, of comparable length and complexity. And with the amount of credits he's churning through making it, in the end he may well have been much better off just hiring some programmers on one of those 'gig programming' sites.
------
And beyond all of this, even if you can surmount these issues - which I think may be inherently impossible - you have another one. The reason people hire software devs is not because they can't do it themselves, but because they want to devote their attention to other things. E.g. - most of everybody could do janitorial work, yet companies still hire millions of janitors. So the 'worst case' scenario would be that you dramatically lower the barriers to entry to software development, and wages plummet accordingly.
But working with AI isn’t really a higher level of abstraction. It’s a completely different process. I’m not hating on it, I love LLMs and use em constantly, but it doesn’t go assembly > C > python > LLMs
It would be a higher level of abstraction if there weren't a need to handhold LLMs. You'd just let one agent build the UI, another the backend, just as you would with humans (you wouldn't validate their entire body of work, including their testing and documentation).
At that point yeah, a project manager would be able to build everything.
>The article sort of goes sideways with this idea but pointing out that AI coding robs you of a deep understanding of the code it produces is a valid and important criticism of AI coding.
You can describe what the code should do with natural language.
I've found that using literate programming with agent calls to write the tests first, then the code, then having the human refine the description of the code and go back to step one, is surprisingly good at this. One of these days I'll get around to writing an emacs mode to automate it, because right now it's yanking and killing between nearly a dozen windows.
Of course this is much slower than regular development but you end up with world class documentation and understanding of the code base.
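Roughly, the loop looks like this. ask_agent() is a stand-in for whatever model API or CLI you actually call, and the task description is invented:

    # Sketch of the loop, not working tooling.
    def ask_agent(prompt: str) -> str:
        # Placeholder: wire this up to your coding agent of choice.
        return f"[agent output for: {prompt[:60]}...]"

    description = "Parse RFC 3339 timestamps and normalize them to UTC."

    while True:
        tests = ask_agent(f"Write unit tests for: {description}")
        code = ask_agent(f"Write code that passes these tests:\n{tests}")
        # Human step: read both, tighten the prose description, repeat.
        refined = input("Refine the description (blank to stop): ").strip()
        if not refined:
            break
        description = refined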
I can imagine an industry where we describe business rules to apply to data in natural language, and the AI simply provides an executable without source at all.
The role of the programmer would then be to test if the rules are being applied correctly. If not, there are no bugs to fix, you simply clarify the business rules and ask for a new program.
I like to imagine what it must be like for a non-technical business owner who employs programmers today. There is a meeting where a process or outcome is described, and a few weeks / months / years later a program is delivered. The only way to know if it does what was requested is to poke it a bit and see if it works. The business owner has no mental model of the code and can't go in and fix bugs.
update: I'm not suggesting I believe AI is anywhere near being this capable.
I can't imagine that yet. Programmers to date cannot reliably achieve such an outcome, so how would an LLM achieve it? We can't even agree on a definition or determine a system for "business rules"?
The programming building blocks popular today (lines of code in sub-routines and modules) do not support such a jump?
Not really, it's more a case of "potentially can" rather than "will". This dynamic has always been there with the whole junior/senior dev split; it's not a new problem. You 100% can use it without losing this, and in an ideal world you can even go so far as to not worry about the understanding for parts that are inconsequential.
>> The article sort of goes sideways with this idea but pointing out that AI coding robs you of a deep understanding of the code it produces is a valid and important criticism of AI coding.
All code is temporary and should be treated as ephemeral. Even if it lives for a long time, at the end of the day what really matters is data. Data is what helps you develop the type of deep understanding and expertise of the domain that is needed to produce high quality software.
In most problem domains, if you understand the data and how it is modeled, the need to be on top of how every single line of code works and the nitty-gritty of how things are wired together largely disappears. This is the thought behind the idiom “Don’t tell me what the code says—show me the data, and I’ll tell you what the code does.”
It is therefore crucial to start every AI-driven development effort with data modeling, and have lots of long conversations with AI to make sure you learn the domain well and have all your questions answered. In most cases, the rest is mostly just busywork, and handing it off to AI is how people achieve the type of productivity gains you read about.
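As a concrete (and entirely invented) example, "start with data modeling" just means pinning down the entities and their relationships before any of the plumbing gets generated:

    # Entities and relationships first; the domain (customers/orders) and field
    # names here are made up purely for illustration.
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class LineItem:
        sku: str
        quantity: int
        unit_price_cents: int   # money as integer cents, never floats

    @dataclass
    class Order:
        id: int
        customer_id: int        # references Customer.id
        placed_at: datetime
        line_items: list[LineItem] = field(default_factory=list)

    @dataclass
    class Customer:
        id: int
        email: str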
Of course, that's not to say you should blindly accept everything the AI generates. Reading the code and asking the AI questions is still important. But the idea that the only way to develop an understanding of the problem is to write the code yourself is no longer true. In fact, it was never true to begin with.
What is "understanding code", mental model of the problem? These are terms for which we all have developed a strong & clear picture of what they mean. But may I remind us all that used to not be the case before we entered this industry - we developed it over time. And we developed it based on a variety of highly interconnected factors, some of which are e.g.: what is a program, what is a programming language, what languages are there, what is a computer, what software is there, what editors are there, what problems are there.
And as we mapped put this landscape, hadn't there been countless situations where things felt dumb and annoying, and then situation in sometimes they became useful, and sometimes they remained dumb? Something you thought is making you actively loosing brain cells as you're doing them, because you're doing them wrong?
Or are you to claim that every hurdle you cross, every roadblock you encounter, every annoyance you overcome has pedagogical value to your career? There are so many dumb things out there. And what's more, there are so many things that appear dumb at first and then, when used right, become very powerful. AI is that: something that you can use to shoot yourself in the foot, if used wrong, but if used right, it can be incredibly powerful. Just like C++, Linux, CORS, npm, TCP, whatever, everything basically.
> The article sort of goes sideways with this idea but pointing out that AI coding robs you of a deep understanding of the code it produces is a valid and important criticism of AI coding.
No it isn't. There's literally nothing about the process that forces you to skip understanding. Any such skips are purely due to the lack of will on the developer's side. This lack of will to learn will not change the outcomes for you regardless of whether you're using an LLM. You can spend as much time as you want asking the LLM for in-depth explanations and examples to test your understanding.
So many of the criticisms of coding with LLMs I've seen really do sound like they're coming from people who already started with a pre-existing bias, fiddled with it for a short bit (or worse, never actually tried it at all) and assumed their limited experience is the be-all end-all of the subject. Either that, or they're typical skill issues.
> There's literally nothing about the process that forces you to skip understanding.
There's nothing about C that "forces" people to write buffer overflows. But, when writing C, the path of least resistance is to produce memory-unsafe code. Your position reminds me of C advocates who say that "good developers possess the expertise and put in the effort to write safe code without safeguards," which is a bad argument because we know memory errors do show up in critical code regardless of what a hypothetical "good C dev" does.
If the path of least resistance for a given tool involve using that tool dangerously, then it's a dangerous tool. We say chefs should work with sharp knives, but with good knife technique (claw grip, for instance) safety is the path of least resistance. I have yet to hear of an LLM workflow where skimming the generated code is made harder than comprehensively auditing it, and I'm not sure that such a workflow would feel good or be productive.
Your point of view assumes the best of people, which is naive. It may not force you to skip understanding, however it makes it much easier to than ever before.
People tend to take the path of least resistance, maybe not everyone, maybe not right away, but if you create opportunities to write poor code then people will take them - more than ever it becomes important to have strong CI, review and testing practices.
Edit: okay, maybe I am feeling a little pessimistic this morning :)
People will complain about letting the LLM code because you won't understand every nuance. Then they will turn around and pip install a dependency without even glancing at the underlying code.
> No it isn't. There's literally nothing about the process that forces you to skip understanding. Any such skips are purely due to the lack of will on the developer's side
This is the whole point. The marginal dev will go to the path of least resistance, which is to skip the understanding and churn out a bunch of code. That is why it's a problem.
You are effectively saying "just be a good dev, there's literally nothing about AI which is stopping you from being a good dev" which is completely correct and also missing the point.
The marginal developer is not going to put in the effort to wield AI in a skillful way. They're going to slop their way through. It is a concern for widespread AI coding, even if it's not a concern for you or your skill peers in particular.
To add to the above - I see a parallel to the "if you are a good and diligent developer there is nothing to stop you from writing secure C code" argument. Which is to say - sure, if you also put in extra effort to avoid all the unsafe bits that lead to use-after-free or race conditions it's also possible to write perfect assembly, but in practice we have found that using memory safe languages leads to a huge reduction of safety bugs in production. I think we will find similarly that not using AI will lead to a huge reduction of bugs in production later on when we have enough data to compare to human-generated systems. If that's a pre-existing bias, then so be it.
> The marginal developer is not going to put in the effort to wield AI in a skillful way. They're going to slop their way through. It is a concern for widespread AI coding, even if it's not a concern for you or your skill peers in particular.
My mental model of it is that coding with LLMs amplifies both what you know and what you don't.
When you know something, you can direct it productively much faster to a desirable outcome than you could on your own.
When you don't know something, the time you normally would have spent researching to build a sufficient understanding to start working on it can be replaced with evaluating the random stuff the LLM comes up with which oftentimes works but not in the way it ought to, though since you can get to some result quickly, the trade-off to do the research feels somehow less worth it.
Probably if you don't have any idea how to accomplish the task you need to cultivate the habit of still doing the research first. Wielding it skillfully is now the task of our industry, so we ought to be developing that skill and cultivating it in our team members.
I don't think that is a problem with AI, it is a problem with the idea that pure vibe-coding will replace knowledgeable engineers. While there is a loud contingent that hypes up this idea, it will not survive contact with reality.
Purely vibe-coded projects will soon break in unexplainable ways as they grow beyond trivial levels. Once that happens their devs will either need to adapt and learn coding for real or be PIP'd. I can't imagine any such devs lasting long in the current layoff-happy environment. So it seems like a self-correcting problem no?
(Maybe AGI, whatever that is, will change things, but I'm not holding my breath.)
The real problem we should be discussing is, how do we convince students and apprentices to abstain from AI until they learn the ropes for real.
> The real problem we should be discussing is, how do we convince students and apprentices to abstain from AI until they learn the ropes for real.
That's just it. You can only use AI usefully for coding* once you've spent years beating your head against code "the hard way". I'm not sure what that looks like for the next cohort, since they have AI on day 1.
> The real problem we should be discussing is, how do we convince students and apprentices to abstain from AI until they learn the ropes for real.
Learning the ropes looks different now. You used to learn by doing; now you need to learn by directing. In order to know how to direct well, you have to first be knowledgeable. So, if you're starting work in an unfamiliar technology, then a good starting point is to read whatever O'Reilly book gives a good overview, so that you understand the landscape of what's possible with the tool and can spot when the LLM is doing (now) obvious bullshit.
You can't just Yolo it for shit you don't know and get good results, but if you build a foundation first through reading, you will do a lot better.
Totally agreed, learning the ropes is very different now, and a strong foundation is definitely needed. But I also think where that foundation lies has changed.
My current project is in a technical domain I had very little prior background in, but I've been getting actual, visible results since day one because of AI. The amazing thing is that for any task I give it, the AI provides me a very useful overview of the thing it produces, and I have conversations with it if I have further questions. So I'm building domain knowledge incrementally even as I'm making progress on the project!
But I also know that this is only possible because of the pre-existing foundation of my experience as a software engineer. This lets me understand the language the AI uses to explain things, and I can dive deeper if I have questions. It also lets me understand what the code is doing, which lets me catch subtle issues before they compound.
I suppose it's the same with reading books, but books being static tend to give a much broader overview upfront, whereas interacting with LLMs results in a much more focused learning path.
So a foundation is essential, but it can now be much more general -- such as generic coding ability -- though that only comes with extensive hands-on experience. There is at least one preliminary study showing that students who rely on AI do not develop the critical problem-solving, coding and debugging skills necessary to be good programmers.
On vibe coding being self-correcting, I would point to the growing number of companies mandating usage of AI and the quote "the market can stay irrational longer than you can stay solvent". Companies routinely burn millions of dollars on irrational endeavours for years. AI has been promised as an insane productivity booster.
I wouldn't expect things to calm down for a while, even if real-life results are worse. You can make excuses for underperformance of these things for a very long time, especially if the CEO or other executives are invested.
> The real problem we should be discussing is, how do we convince students and apprentices to abstain from AI until they learn the ropes for real
I hate to say it but that's never going to happen :/
I'm a bit cynical at this point, but I'm starting to think these AI mandates are simply another aspect of the war of the capital class on the labor class, just like RTO. I don't think the execs truly believe that AI will replace their employees, but it sure is a useful negotiation lever. As in, not just an excuse to do layoffs but also a mechanism to pressure remaining employees: "Before you ask for perks or raises or promotions, why are you not doing more with less since you have AI? You know that soon we could replace you with AI for much cheaper?"
At the same time, I'll also admit that AI resistance is real; we see it in the comments here for various reasons -- job displacement fears, valid complaints about AI reliability, ethical opposition, etc. So there could be a valid need for strong incentives to adopt it.
Unfortunately, AI is also deceptively hard to use effectively (a common refrain of mine.) Ideally AI mandates would come with some structured training tailored for each role, but the fact that this is not happening makes me wonder about either the execs' competency or their motives.
We most definitely should, especially if you're working in a team or organization bigger than a handful of people, because it's almost certain that you will need to change or interact with that code very soon in the lifetime of the project. When that happens you want to make sure the code aligns with your own mental model of how things work.
The industry has institutionalized this by making code reviews a very standard best practice. People think of code reviews mainly as a mechanism to reduce bugs, but it turns out the biggest benefits (borne out by studies) actually are better context-sharing amongst the team, mentoring junior engineers, and onboarding of new team-mates. It ensures that everyone has the same mental model of the system despite working on different parts of it (cf. the story of the blind men and the elephant). This results in better ownership and fewer defects per line of code.
Note, this also doesn't mean everybody reviews each and every PR. But any non-trivial PR should be reviewed by team-mates with appropriate context.
AI is not my coworker, with different tasks and responsibilities.
The comparison is only reasonable if most of your job is spent trying to understand their code and making sure it did what you wanted. And with them standing next to you, ready to answer questions, explain anything you don't understand and pull in any external, relevant parts of the codebase.
Of course not, that's a bit disingenuous. I would hope my colleagues write code that is comprehensible so it's maintainable. I think that if the code is so complex and inscrutable that only the author can understand it then it's not good code. AI doesn't create or solve this problem.
I do think when AI writes comprehensible code you can spend as much time as necessary asking questions to better understand it. You can ask about tradeoffs and alternatives without offending anybody and actually get to a better place in your own understanding than would be possible alone.
Who is this endless cohort of developers who need to maintain a 'deep understanding' of their code? I'd argue a high % of all code written globally on any given day that is not some flavour of boilerplate, while written with good intention, is ultimately just short-lived engineering detritus, if it even gets a code review to pass.
If you're on HN there's a good chance you've self-selected into "caring about the craft and looking for roles that require more attention."
You need to care if (a) your business logic requirements are super annoyingly complex, (b) you have hard performance requirements, or (c) both. (c) is the most rare, (a) is the most common of those three conditions; much of the programmer pay disparity between the top and the middle or bottom is due to this, but even the jobs where the complexity is "only" business requirements tend to be quite a bit better compensated than the "simple requirements, simple needs" ones.
I think there's a case to be made that LLM tools will likely make it harder for people to make that jump, if they want to. (Alternately they could advance to the point where the distinction changes a bit, and is more purely architectural; or they could advance to the point where anyone can use an LLM to do anything - but there are so many conditional nuances to what the "right decision" is in any given scenario there that I'm skeptical.)
A lot of times floor-raising things don't remove the levels, they just push everything higher. Like a cheap crap movie today will visually look "better" from a technology POV (sharpness, special effects, noise, etc) than Jurassic Park from the 90s, but the craft parts won't (shot framing, deliberate shifts of focus, selection of the best takes). So everyone will just get more efficient and more will be expected, but still stratified.
And so some people will still want to figure out how to go from a lower-paying job to a higher-paying one. And hopefully there are still opportunities, and we don't just turn into other fields, picking by university reputations and connections.
> You need to care if (a) your business logic requirements are super annoyingly complex, (b) you have hard performance requirements, or (c) both. (c) is the most rare
But one of the most fun things you can do is (c): creative game development coding. Like coding world simulations etc., you want to be very fast, but the rules and interactions are very coupled and complex compared to most regular enterprise logic, which is more decoupled.
So while most of the work programmers do fits (a), the work people dream about doing is (c), and that means LLMs don't help you make the fun things; they just remove the boring jobs.
In my experience the small percent of developers who do have a deep understanding are the only reason the roof doesn’t come crashing in under the piles of engineering detritus.