My most recent example of this is mentoring young, ambitious, but inexperienced interns.
Not only did they produce about the same amount of code in a day that they used to produce in a week (or two), but several other things also made my work harder than before:
- During review, they hadn't thought as deeply about their code so my comments seemed to often go over their heads. Instead of a discussion I'd get something like "good catch, I'll fix that" (also reminiscent of an LLM).
- The time spent on trivial issues went down a lot, almost to zero, but the remaining issues were much more subtle and time-consuming to find and describe.
- Many bugs were of a new kind (to me), the code would look like it does the right thing but actually not work at all, or just be much more broken than code with that level of "polish" would normally be. This breakdown of pattern-matching compared to "organic" code made the overhead much higher. Spending decades reviewing code and answering Stack Overflow questions often makes it possible to pinpoint not just a bug but how the author got there in the first place and how to help them avoid similar things in the future.
- A simple, but bad (inefficient, wrong, illegal, ugly, ...) solution is a nice thing to discuss, but the LLM-assisted junior dev often cooks up something much more complex, which can be bad in many ways at once. The culture of slowly growing a PR from a little bit broken, thinking about design and other considerations, until it's high quality and ready for a final review doesn't work the same way.
- Instead of fixing the things in the original PR, I'd often get a completely different approach as the response to my first review. Again, often broken in new and subtle ways.
This led to a kind of effort inversion, where senior devs spent much more time on these PRs than the junior authors themselves. The junior dev would feel (I assume) much more productive and competent, but the response to their work would eventually lack most of the usual enthusiasm or encouragement from senior devs.
How do people work with these issues? One thing that worked well for me initially was to always require a lot of (passing) tests, but eventually these tests would suffer from many of the same problems.
> - Many bugs were of a new kind (to me), the code would look like it does the right thing but actually not work at all, or just be much more broken than code with that level of "polish" would normally be.
This reminded me of a quarter million dollar software project one of my employers had contracted to a team in a different country. On the face of it - especially if you go and check by the spec sheet - everything was there but the thing was not a cohesive whole. They did not spend one second beyond the spec sheet and none of the common sense things that "follow" from the spec were there. The whole thing was scrapped immediately.
With LLMs this kind of work now basically becomes free to do and automatic.
I'm expecting to see so much more poor quality software being made. We're going to be swimming in an ocean of bad software.
Good experienced devs will be able to make better software, but so many inexperienced devs will be regurgitating so much more lousy software, at a pace never seen before, that it's going to be overwhelming. Or as the original commenter described, they're already being overwhelmed.
> Good experienced devs will be able to make better software
I lowkey disagree. I think good experienced devs will be pressured to write worse software or be bottlenecked by having to deal with bad software. Depends on company and culture of course. But consider that you as the experienced dev now have to explain things that go completely over the head of the junior devs, and most likely the manager/PO, so you become the bottleneck, and all pressure will come down on you. You will hear all kinds of stuff like "80% there is enough" and "don't let perfect be the enemy of good" and "you're blocking the team, we have a deadline", and that will only become worse. Unless you're lucky enough to work in a place with an actually good engineering culture.
I think the recent post about the Cloudflare engineer who built an OAuth implementation, https://news.ycombinator.com/item?id=44159166, shows otherwise (note the Cloudflare engineer, kentonv, comments a bunch in the discussion). The author, who is clearly an expert, said it took him days to complete what would have taken him weeks or months to write manually.
I love that thread because it clearly shows both the benefits and pitfalls of AI codegen. It saved this expert a ton of time, but the AI also created a bunch of "game over" bugs that a more junior engineer probably would have checked in without a second thought.
There was also a review of that code about a week later [0] which highlights the problems with LLM-generated code.
Even looking strictly at coding, the hard thing about programming is not writing the code. It is understanding the problem and figuring out an elegant and correct solution, and LLMs can't replace that process. They can help with ideas though.
> There was also a review of that code about a week later [0] which highlights the problems with LLM-generated code.
Not really. This "review" was stretching to find things to criticize in the code, and exaggerated the issues he found. I responded to some of it: https://news.ycombinator.com/item?id=44217254
Unfortunately I think a lot of people commenting on this topic come in with a conclusion they want to reach. It's hard to find people who are objectively looking at the evidence and drawing conclusions with an open mind.
Thanks for responding. I read that dude's review, and it kind of pissed me off in an "akshually I am very smart" sort of way.
Like his first argument was that you didn't have a test case covering every single MUST and MUST NOT in the spec?? I would like to introduce him to the real world - but more to the point, there was nothing in his comments that specifically dinged the AI, and it was just a couple pages of unwarranted shade that was mostly opinion with 0 actual examples of "this part is broken".
> Unfortunately I think a lot of people commenting on this topic come in with a conclusion they want to reach. It's hard to find people who are objectively looking at the evidence and drawing conclusions with an open mind.
Couldn't agree more, which is why I really appreciated the fact that you went to the trouble to document all of the prompts and make them publicly available.
Thank you for answering, I hadn't seen your rebuttal before. It does seem that any issues, if there were any (your arguments about CORS headers sound convincing to me, but I'm not an expert on the subject - I study them every time I need to deal with this), were not a result of using an LLM but a conscious decision. So either way, the LLM has helped you achieve this result without introducing any bugs that you missed and Mr. Madden found in his review, which sounds impressive.
I won't say that you have converted me, but maybe I'll give LLMs a shot and judge for myself if they can be useful to me. Thanks, and good luck!
You can certainly make the argument that this demonstrates risks of AI.
But I kind of feel like the same bug could very easily have been made by a human coder too, and this is why we have code reviews and security reviews. This exact bug was actually on my list of things to check for in review, I even feel like I remember checking for it, and yet, evidently, I did not, which is pretty embarrassing for me.
You touch on exactly the point that I try to make to the AI-will-replace-XXX-profession crowd: You have to already be an expert in XXX to get the most out of AI. Cf. Gell-Mann Amnesia.
I'm showing my age, but this is almost exactly analogous to the rise of Visual Basic in the late nineties.
The promise then was similar: "non-programmers" could use a drag-and-drop, WYSIWYG editor to build applications. And, IMO, VB was actually a good product. The problem is that it attracted "developers" who were poor/inexperienced, and so VB apps developed a reputation for being incredibly janky and bad quality.
The same thing is basically happening with AI now, except it's not constrained to a single platform, but instead it's infecting the entire software ecosystem.
We turned our back on VB. Do we have the collective will to turn our back on AI? If so I suspect it’ll take a catalyzing event for it to begin. My hunch tells me no, no we don’t have the will.
We didn't turn our back on VB. Microsoft killed it when it became a citizen of the .NET ecosystem; pairing it with C# concepts, requiring extensive code changes, and shipping an IDE that was read-only during debugging (yah, you couldn't edit the code while debugging) killed the product.
Greed (wanting an enterprise alternative to Java and C++ builder) killed VB, not the community.
Fwiw I honestly think it was a mistake to turn our back on vb.
Yes, there were a lot of crappy, barely functioning programs made in it. But they were programs that wouldn't have existed otherwise. E.g. for small businesses automating things, VB was amazing, and even if the program was barely functional it was better than nothing.
When the Derecho hit Iowa and large parts of my area were without power for over a week we got to discover just how many of our very large enterprise processes were dependent to some degree on "toy" apps built in "toy" technologies running on PCs under people's desks. Some of it clever but all of it fragile. It's easy to be a strong technical person and scoff at their efforts. Look how easily it failed! But it also ran for years with so few issues it never rose to IT's attention before a major event literally took the entire regional company offices offline. It caused us some pain as we had to relocate PCs to buildings with sufficient backup power. But overall the effort was far smaller than building all of those apps with the "proper" tools and processes in the first place.
Large companies can be a red tape nightmare for getting anything built. The process overload will kill simple non-strategic initiatives. I can understand and appreciate less technical people who grab whatever tool they can to solve their own problems when they run into blockers like that. Even if they don't solve it in the best way possible according to experts in the field. That feels like the hacker spirit to me.
Please don’t stop at building “toy” prototypes, it’s a great start, but take some time to iterate, rebuild, bring it to production standards, make it resilient and scalable.
You’d be surprised how little effort it is compared to having to deal with a massive outage. E.g. you did eventually have to think about backup power.
I think we will need to find a way to communicate “this code is the result of serious engineering work and all tradeoffs have been thought about extensively” and “this code has been vibecoded and no one really cares”. Both sides of that spectrum have their place and absolutely will exist. But it’s dangerous to confuse the two
There's a simple way to communicate it. Just leave in the emoticons added in comments by the LLM.
Wrote it initially as a joke, but maybe it's not that dumb? I already do it on LinkedIn. I'm job hunting and post slop from time to time to game LinkedIn algorithms to get better positioning among other potential candidates.
And so as not to waste anybody's time, I leave in the emotes at the beginning of sentences just so people in the know can tell it's slop.
Interesting thought. Yeah.. the whole LLM-generated thing might end up being a boon. It is (reasonably) distinctive. At least for now. And rightly or wrongly it triggers defensive reflexes
Drag and drop GUI builders were awesome. Responsive layouts ruined GUI programming for me. It made it too much of a fuss to make anything "professional".
It's the exact same thing every time a technical bar is lowered and more people can participate in something. From having to manually produce your own film to having film processing readily available on demand to not needing to process film at all and everyone has a camera in their pocket. The number of people taking photos has absolutely exploded. The average quality of photos has to have fallen through the floor. But you've also got a ton of people who couldn't participate previously for one reason or another who go on to do great things with their new found capabilities.
Software is a very different beast though because this crappy technical debt lives on, it often grows "tentacles" with poorly defined boundaries, people and companies come to depend on it, and then the mess must eventually be cleaned up.
Take your photos example. Sure, the number of photos taken has exploded, but who cares if there are now reams and reams of crappy vacation photos - it's not like anyone is really forced to look at it.
With AI-generated code, I think it's actually awesome for small, individual projects. And in capable hands, they can be a fantastic productivity enhancer in the enterprise. But my heart bleeds for the poor sap who is going to eventually have to debug and clean up the mountains of AI code being checked in by folks with a few months/years of experience.
> , people and companies come to depend on it, and then the mess must eventually be cleaned up.
I have found time and again that enough technological advancement will make previously difficult things easy that when it's time to clean up the old stuff, it's not such a huge issue. Especially so if you do not need to keep a history of everything and can start fresh. This probably would not fly in a huge corp but it's fine for small/medium businesses. After all, whole companies disappear and somehow we live on.
There are ways to fight it though. Look at the Linux kernel, for instance - they have been overwhelmed with poor contributions since long before LLMs. The answer is to maintain standards that put as much burden on the contributor as possible, and to normalize an unapologetic "no" from reviewers.
Does that work as well with non-strangers who are your coworker? I'm not sure.
Also if you're organizationally changing the culture to force people to put more effort in writing the code, why are you even organizationally using LLMs...?
> Does that work as well with non-strangers who are your coworker?
Yeah, OK, I guess you have to be a bit less unapologetic than Linux kernel maintainers in this case, but you can still shift the culture towards more careful PRs I think.
> why are you even organizationally using LLMs
Many people believe LLMs make coders more productive, and given the rapid progress of gen AI it's probably not wise to just dismiss this view. But there need to be guardrails to ensure the productivity is real and not just creating liability. We could live with weaker guardrails if we can trust that the code was in a trusted colleague's head before appearing in the repo. But if we can't, I guess stronger guardrails are the only way, aren't they?
I don’t want to just dismiss the productivity increase. I feel 100% more productive on throw away POCs and maybe 20% more productive on large important code bases.
But when I actually sit down and think it through, I’ve wasted multiple days chasing down subtle bugs that I never would have introduced myself. It could very well be that there’s no productivity gain for me at all. I wouldn’t be at all surprised if the numbers showed that was the case.
But let’s say I am actually getting 20%. If this technology dramatically increases the output of juniors and mid level technical tornadoes that’s going to easily erase that 20% gain.
I’ve seen codebases that were dominated by mid-level technical tornadoes and juniors; no amount of guardrails could ever fix them.
Until we are at the point where no human has to interact with code (and I’m skeptical we will ever get there short of AGI) we need automated objective guardrails for “this code is readable and maintainable”, and I’m 99.999% certain that is just impossible.
My point in that second question was: Is the human challenge of getting a lot of inexperienced engineers to fully understand the LLM output actually worth the time, effort and money to solve vs sticking to solving the technical problems that you're trying to make the LLM solve?
Usually organizational changes are massive efforts. But I guess hype is a hell of an inertia buster.
The change is already happening. People graduating now are largely "AI-first", and it's going to be even worse if you listen to what teachers say. And management often welcomes it too. So you need to deal with it one way or another.
It's measurable in the number of times you have to spend >x minutes to help them go through something they should have written up by themselves. You can count the number of times you have to look at something and tell them "do it again, but without LLM this time". At some point you fire them.
My opinion on someone is how I decide whether I want to work with them and help them grow or fire them/wait for them to fail on their own merit (if somebody else is in charge of hiring/firing).
Yes, but some of us have seen this coming for a long time now.
I will have my word in the matter before all is said and done. While everyone is busy pivoting to AI I keep my head down and build the tools that will be needed to clean up the mess...
I'm building a universal DOM for code, in the hope that we'll see an explosion in code whose purpose is to help clean up other code.
If you want to write code that makes changes to a tree of HTML nodes, you can pretty much write that code once and it will run in any web browser.
If you want to write code that makes a new program by changing a tree of syntax nodes, there are an incredible number of different and wholly incompatible environments for that code to run in. Transform authors are likely forced to pick one or two engines to support, and anyone who needs to run a lot of codemods will probably need to install 5-10 different execution engines.
Most people seem not to notice or care about this situation, or to realize that their tools are vastly underserving their potential just because we can't come up with the basic standards necessary to enable universal execution of codemod code. That also means there are drastically lower incentives to write custom codemods and lint rules than there could/should be.
We're cleaning up the broken links as time goes on, but it is probably obvious to you from browsing around that some parts of the site are still very much under construction.
The JSX noise is CSTML, a data format for encoding/storing parse trees. It's our main product. E.g. a simple document might look something like `<*BooleanLiteral> 'true' </>`. It's both the concrete syntax and the semantic metadata offered as a single data stream.
The easiest way to consume a CSTML document is to print the code stored in it, e.g. `printSource(parseCSTML(document))`, which would get you `true` for my example doc. Since we store all the concrete syntax, printing the tree is guaranteed to get you the exact same input program the parser saw. This means you can use this to rearrange trees of source code and then print them over the original, allowing you to implement linters, pretty-printers, or codemod engines.
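In code, the round trip looks roughly like this (a minimal sketch: `parseCSTML` and `printSource` are the calls from above, while the import path and exact signatures here are simplified stand-ins, not the real API):

```typescript
// Stand-in import path, shown only to make the sketch self-contained.
import { parseCSTML, printSource } from "cstml";

// A trivial CSTML document: concrete syntax plus semantic metadata in one stream.
const doc = `<*BooleanLiteral> 'true' </>`;

// Parsing keeps every byte of concrete syntax in the tree...
const tree = parseCSTML(doc);

// ...so printing the tree reproduces exactly the program the parser saw.
const source = printSource(tree); // => "true"

// Because the round trip is lossless, a codemod can rearrange the tree and
// print it back over the original file without disturbing anything it didn't touch.
console.log(source === "true"); // true
```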
These CSTML documents also contain all the information necessary to do rich presentation of the code document stored within (syntax highlighting). I'm going to release our native syntax highlighter later today hopefully!
It's an immutable btree-based format for syntax trees which contain information both abstract and concrete. Our markup language for serializing the trees is Concrete Syntax Tree Markup Language, or CSTML.
> I'm expecting to see so much more poor quality software being made. We're going to be swimming in an ocean of bad software.
That's my expectation as well.
The logical outcome of this is that the general public will eventually get fed up, and there will be an industry-wide crash, just like in 1983 and 2000. I suppose this is a requirement for any overly hyped technology to reach the Plateau of Productivity.
> Good experienced devs will be able to make better software,
No, they won't. It's a race to the bottom.
I can take extra time to produce something that won't fall over on the first feature addition, that won't need to be rewritten with a new approach when the models get upgraded/changed/whatever and will reliably work for years with careful addition of new code.
I will get underbid by a viber who produced a turd in an afternoon, and has already spent the money from the project before the end of the week.
AWS itself is currently polluting their online documentation with GenAI-generated snippets... I can only imagine what horrors lurk in their internal code base. In a move similar to the movie War Games, maybe humans are now out of the loop, and before a final commit LLMs are deciding...
Honestly, I expect LLMs, or the combination of algorithms that make them usable (Claude Code), to get better fast enough that we’ll never reach that phase. All the good devs know what the current problems with LLM-assisted coding are, and a lot of them are working to mitigate and/or fix those problems.
I dealt with a 4x as expensive statement-of-work fixed price contract that was nearshored and then subbed out to a revolving cast of characters.
The SOW was so poorly specified that it was easy to maliciously comply with it, and it had no real acceptance tests. As a result legal didn't think IT would have a leg to stand on arguing with the vendor on the contract, and we ended up constantly re-negotiating on cost for them to make fixes just to get a codebase that never went live.
An example of how bad it was - imagine you have a database of metadata to generate downloader tasks in a tool like Airflow, but instead of doing any sane grouping of, say, the 100 sources with 1000 files each every day into 100ish tasks, it generated a 700,000-task graph because it had gone task-per-file-per-day.
We were using some sort of SaaS dag/scheduler tool at the time, and if we had deployed we'd have been using 5x more tasks than the entire decades-old, 200-person team was using to date, and paid for it.
Or they implemented the file-arrival SLA checker such that it only alerted when a late file arrived. So if a file never arrives, it never alerts. And when a daily file arrives a week late, you get the alert on arrival, not a week earlier when it actually became late.
I have seen the revolving cast of characters bit play out several times. It’s as if they hire 1 or 2 competent people and rotate them to face the client that is currently screaming the loudest.
To be fair though, in your case it sounds like 51% (and maybe even 75+%) of the defect was in the specifications.
Oh yeah, 75-90% of the outcome was determined by the bad specification/contract.
You can have a loose spec and trust the team to do the right thing if it's an internal team you'll allocate budget/time to for iteration. Not if you have a fixed time & cost contract.
We do quality outsource development for usual web/mobile stuff (yeah, it exists).
80% of our job is helping clients figure out what they actually need and what's reasonable to implement given the current state of tech, finding that balance between ideal and realistic software, or rather negotiating it.
So expecting client to write SOWs/specifications is like expecting client to write code.
Aha, actually, I've recently seen it quite a few times: people send me detailed SOWs which look good, but once I try to read them to actually build an understanding of the domain logic/program in my head, they don't make any sense.
Very close to the great-grandparent comment about mentoring junior programmers. Now imagine they are the ones paying you!
I’d argue that with software, the level of detail you need to specify to do a successful SOW is so much work that you might as well just do the dev work too.
It also cuts against all trends of iterative development in that it is like waterfall with a gun to your head to get the spec 1000% right.
I once saw something like that where there was an existing codebase and a different business unit in the company wanted to add a large new feature.
The contractors simply wanted to get paid, naturally. The people who paid them didn't understand the original codebase, and they did not communicate with the people who designed and built the original codebase either. The people who built the original code were overworked and saw the whole brouhaha as a burden over which they had no control.
It was a low seven figure contract. The feature was scrapped after two or three years while the original product lived on and evolved for many years after that.
I hope that management learned their lesson, but I doubt it.
As a participant in many kinds of similar projects, let's put it this way: the crew already knows that the ship has a few holes while still at the harbour, but the captain decides to sail anyway.
Eventually you will find yourself in deep waters, with the ship lower than it should be, routinely bailing out buckets of water, wishing for the nearest island, only to repair the ship with whatever is on that island and keep sailing to the next one, with the buckets ready.
After a couple of enterprise projects, one learns that it's either move into another business or learn to cope with this approach.
Which might be especially tricky given the job landscape in someone's region.
My suspicion is that all types of work are like this; a universal issue where quality and forethought are at odds with quantity and good enough (where good enough trends towards worse over time).
Before SE I had a bunch of vastly different jobs and they all suffered from something akin to crab bucket mentality where doing a good job was something you got away with.
I've had jobs where doing the right thing was something you kept to yourself or suffer for it.
This almost seems to be a weird artefact of capitalism. I've worked on several projects which at some point became obviously doomed to almost everybody in the trenches, but management/investors/owners kept believing. Perception of reality did not permeate the class divide.
I wish I could make $$$ off this insight somehow, but I'm not sure it's possible.
I think this is driven more by hierarchy and power games rather than capitalism. Basically, if your superiors don't want to hear bad news, then either you'll tell them good news only or you'll be replaced by someone who will.
Source: I've been replaced by this process a number of times.
Because hundreds of years after multiple things have proven that systems with a free and open flow of information, skills, and techniques beat any system where information is walled off, capitalism (or corporatism really) still insists, all too often, on favoring feudal-style top-down methods of control instead of bottom-up “empower the teams and facilitate the flow of information”; they don’t merely suppress the flow of information, they actively ignore information; and they prefer people to sit idle rather than work on non-approved priorities.
Many people use capitalism to mean the system of multinational corps and their secretive, hierarchical, and morally offensive ways, rather than anything to do with free market economics (which are predicated on the free flow of information in the market).
The trick with waterfall is that discovering issues is deferred until the very last phases of test and user acceptance, at which point it's too late to do anything.
Well said. That has been my experience as well, but from the perspective of using these tools on my own. Sure, I can now generate thousands of lines of code relatively quickly, but the hard part is actually reviewing the code to ensure that it does what I asked, fix bugs, hunt for security issues, refactor, simplify and remove code, and so on. I've found that it's often much more productive to write the code myself, and rely on the LLM for simple autocomplete tasks on the way. I imagine that this workflow would be much harder when you have to communicate with a less experienced human who will in turn need to translate it to an LLM, because of the additional layers of indirection.
I suspect that the majority of the people who claim that these tools are making them more productive are simply skipping these tasks altogether, or they never cared to do them in the first place. Then the burden for maintaining code quality is on the few who actually care, which has now grown much larger because of the amount of code that's thrown at them. Unfortunately, these people are often seen as pedants and sticklers who block PRs for no good reason. That sometimes does happen, but most of the time, these are the folks who actually care about the product shipped to users.
I don't have a suggestion for improving this, but rather a grim outlook that it's only going to get worse. The industry will continue to be flooded by software developers trained on LLM use exclusively, and the companies who build these tools will keep promoting the same marketing BS because it builds hype, and by extension, their valuation.
> I suspect that the majority of the people who claim that these tools are making them more productive are simply skipping these tasks altogether
I think that's probably true, but I think there are multiple layers here.
There's what's commonly called vibe coding, where you don't even look at the code.
Then there's what I'd call augmented coding, where you generate a good chunk of the code, but still refactor and generally try to understand it.
And then there's understanding every line of it. For this, in particular, I don't believe LLMs speed things up. You can get the LLM to _explain_ every line to you, but what I mean is to look at documentation and specs to build your understanding and test out fine grained changes to confirm it. This is something you naturally do while writing code, and unless you type comically slow, I'm not convinced it's not faster this way around. There's a very tight feedback loop when you are writing and testing code atomically. In my experience, this prevents an unreasonable amount of emergencies and makes debugging orders of magnitude faster.
I'd say the bulk of my work is either in the second or the third bucket, depending on whether it's production code, the risks involved etc.
These categories have existed before LLMs. Maybe the first two are cheaper now, but I've seen a lot of code bases that fall into them - copy pasting from examples and SO. That is, ultimately, what LLMs speed up. And I think it's OK for some software to fall into these categories. Maybe we'll see too much fall into them for a while. I think eventually, the incredibly long feedback cycles of business decisions will bite and correct this. If our industry really flies off the handle, we tend to have a nice software crisis and sort it out.
I'm optimistic that, whatever we land on eventually, generative AI will have reasonable applications in software development. I personally already see some.
There is also the situation in which the developer knows the tools by heart and has ownership of the codebase, hence intuitively knows exactly what has to be changed and only needs to take action.
These devs don't get any value whatsoever from LLMs, because explaining it to the LLM takes longer than doing it themselves.
Personally, I feel like everything besides actually vibe coding + maybe sanity checking via a quick glance is a bad LLM application at this point in time.
You're just inviting tech debt if you actually expect this code to be manually adjusted at a later phase. Normally, code tells a story. You should be able to understand the thought process of the developer while reading it - and if you can't, there is an issue. This pattern doesn't hold up for generated code, even if it works. If an issue pops up later, you'll just be scratching your head over what this was meant to do.
And just to be clear: I don't think vibe coding is ready for current enterprise environments either - though I strongly suspect it's going to decimate our industry once tooling and development practices for this have been pioneered. The current models are already insanely good at coding if provided the correct context and prompt.
E.g. countless docs on each method defining use cases, forcing the LLM to backtrack through the code paths before changes to automatically detect regressions, etc. Current vibe coding is basically like the original definition of a hacker: a person creating furniture with an axe. It basically works, kinda.
> These devs don't get any value whatsoever from LLMs, because explaining it to the LLM takes longer than doing it themselves.
I feel like people are maybe underestimating the value of LLMs for some tasks. There's a lot of stuff where, I know how to do it but I can't remember the parameter order or the exact method name and the LLM absolutely knows. And I really get nothing out of trying to remember/look up the exact way to do something. Even when I do know, it often doesn't hurt to be like "can you give me a loop to replace all the occurrences of foo with bar in this array of strings" and I don't need to remember if it's string.replace(foo,bar), whether I need to use double or single quotes, if it's actually sub or gsub or whatever.
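For concreteness, here's the kind of throwaway snippet I mean (TypeScript here; the array name is made up, and `replaceAll` is exactly the detail I'd otherwise stop to look up):

```typescript
// Replace every occurrence of "foo" with "bar" in an array of strings.
const inputs: string[] = ["foobar", "food", "bar"];
const replaced = inputs.map((s) => s.replaceAll("foo", "bar"));
console.log(replaced); // ["barbar", "bard", "bar"]
```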
There's lots of tiny sub-problems that are totally inconsequential and an LLM can do for me, and I don't think I lose anything here. In fact maybe I take a little longer, I chat with the LLM about idioms a bit and my code ends up more idiomatic/more maintainable.
It kind of calls to mind something Steve Jobs said about how hotkeys are actually worse than using a mouse, and that keyboard users aren't faster, they just think they are. But using LLMs for these sorts of things feels similar in that, like using keyboard shortcuts, maybe it takes longer, but I can use muscle memory so I don't have to break flow, and I can focus on something else.
Asking the LLM for these sorts of trivial problems means I don't have to break flow, I can stay focused on the high-level problem.
> There's a lot of stuff where, I know how to do it but I can't remember the parameter order or the exact method name and the LLM absolutely knows. And I really get nothing out of trying to remember/look up the exact way to do something. Even when I do know, it often doesn't hurt to be like "can you give me a loop to replace all the occurrences of foo with bar in this array of strings" and I don't need to remember if it's string.replace(foo,bar), whether I need to use double or single quotes, if it's actually sub or gsub or whatever.
I mean, I kinda get it in more complicated contexts, but the particular examples you describe (not remembering method names and/or parameter orderings) have been solved for ages by any decent IDE.
Developers have always loved the new and shiny. Heck, getting developers not to rewrite an application in their new favorite framework is a tough sell.
LLM “vibe coding” is another continuation of this “new hotness”, and while the more seasoned developers may have learned to avoid it, that’s not the majority view.
CEOs and C-suites have always been disconnected from the first order effects of their cost-cutting edicts, and vibe coding is no different in that regard. They see the ten dollars an hour they spend on LLMs as a bargain if they can hire a $30 an hour junior programmer instead of a $150 an hour senior programmer.
They will continue to pursue cost-cutting, and the advent of vibe coding matches exactly what they care about: software produced for a fraction of the cost.
Our problem - or the problem of the professionals - is that we have not been successful in translating the inherent problems with the CEOs' approach into a change in how the C-suite operates. We have not successfully persuaded them that higher-quality software = more sales, or lower liability, or lower maintenance cost, and that's partially because we as an industry have eschewed those for “move fast and break things”. Vibe coding is “Move Fast and Break Things” writ large.
> Heck, getting developers not to rewrite an application in their new favorite framework is a tough sell.
This depends a lot on the "programming culture" from which the respective developers come. For example, in the department where I work (in some conservative industry) it would rather be a tough sell to use a new, shiny framework because the existing ("boring") technologies that we use are a good fit for the work that needs to be done and the knowledge that exists in the team.
I rather have a feeling that in particular the culture around web development (both client- and server-side parts) is very prone to this phenomenon.
The Venn diagram of the programming culture of the companies that embrace vibe coding and the companies whose developers like to rewrite applications when a new framework comes out is almost a perfect circle, however.
In my experience, it was. And if we're getting real for a moment, the vast majority of programmers gets paid by a company that is, first and foremost, interested in making more money. IMHO all technical decisions are business decisions in disguise.
Can the business afford to ship something that fails for 5% of their users? Can they afford to find out before they ship it or only after? What risks do they want to take? All business decisions. In my CTO jobs and fractional CTO work, I always focused on exposing these to the CEO. Never a "no", always a "here's what I think our options and their risks and consequences are".
If sound business decisions lead to vibe coding, then there's nothing wrong with it. It's not wrong to lose a bet where you understood the odds.
And don't worry about businesses that make uninformed bets. They can get lucky, but by and large, they will not survive against those making better-informed bets. Law of averages. Just takes a while.
I agree with your sentiment, but not with the conclusion.
Sure, technical decisions ultimately depend on a cost-benefit analysis, but the companies who follow this mentality will cut corners at every opportunity, build poor quality products, and defraud their customers. The unfortunate reality is that in the startup culture "move fast and break things" is the accepted motto. Companies can be quickly started on empty promises to attract investors, they can coast for months or years on hype and broken products, and when the company fails, they can rebrand or pivot, and do it all over again.
So making uninformed bets can still be profitable. This law of averages you mention just doesn't matter. There will always be those looking to turn a quick buck, and those who are in it for the long haul, and actually care about their product and customers. LLMs are more appealing to the former group. It's up to each software developer to choose the companies they wish to support and be associated with.
Tech and product are just small components in what makes the business profitable. And often not as central as we in our industry might _like_ to believe. From my perspective, building software is the easy, the fun part. Many bets made have nothing to do with the software.
And yes, there is enshittification, there are immoral actors. The market doesn't solve these problems; if anything, it causes them.
What can solve them? I have only two ideas:
1. Regulation. To a large degree this stops some of the worst behaviour of companies, but the reality in most countries I can think of is that it's too slow, and too corrupt (not necessarily by accepting bribes, also by wanting to be "an AI hub" or stuff like that) to be truly effective.
2. Professional ethics. This appears to work reasonably well in medicine and some other fields, but I have little hope our field is going to make strides here any time soon. People who have professional ethics either learn to turn it off selectively, or burn out. If you're a shady company, as long as you have money, you will find competent developers. If you're not a shady company, you're playing with a handicap.
It's not all so black and white for sure, so I agree with you that there's _some_ power in choosing who to work for. They'll always find talent if they pay enough, but no need to make it all too easy for them.
To play devil's advocate for a second, the law of averages states nobody should ever found a startup. Or any business for that matter.
It’s rare that startups gain traction because they have the highest-quality product rather than because they have the best ability to package, position, and market it while scaling all the other things needed to run a company.
They might get acqui-hired for that reason, but rarely do they stand the test of time. And when they do, it's almost always because the founders stepped aside and let suits run all or most of the show.
Would be interesting to look at the real-world impact of the rise of outsourcing coding to the cheapest, lowest-skilled overseas body shops en masse, around the 2000s. Or the impact of trash versions of commodified products flooding Amazon.
The volume here is orders of magnitude greater, but that’s the closest example I can think of.
> Would be interesting to look at the real-world impact of the rise of outsourcing coding to the cheapest, lowest-skilled overseas body shops en masse, around the 2000s.
Tech exec here. It is all about gamed metrics. If the board-observed metric is mean salary per tech employee, you'll get masses of people hired in India. In our case, we hire thousands in India. Only about 20% are productive, but % productive isn't the metric, so no one cares. You throw bodies at the problem and hope someone solves it. It's great for generations of overseas workers, many of whom may not have had a job otherwise. You probably have dozens of Soham Parekhs.
Western execs also like this because it inflates headcount, which is usually what exec comp is based on: "I run a team of 150...". Their lieutenants also like it because they can say "I run a team of 30", as do their sub-lieutenants: "I run a team of 6".
I'm a fractional RevOps consultant for a company for about 20 hours a week. They spend more for those 20 hours than they would if they filled the position full time, but they'd rather it this way because it shows up on a different line item and goes with their narrative of slashing headcount. Expect we'll see a lot more of this, particularly as everyone races to become the next "single-person unicorn startup."
>I suspect that the majority of the people who claim that these tools are making them more productive are simply skipping these tasks altogether, or they never cared to do them in the first place.
I think this follows a larger pattern of AI. It helps someone with enough maturity not to rely on it too blindly and enough foresight to know they still need to grow their own skills, but it does well enough that those looking for an easy or quick answer are now given a tool that lets them skip doing more of the hard work. It empowers seniors (developer or senior level in unrelated fields) but traps juniors. Same as using AI to solve a math problem. Is the student verifying their own solution against the AI's, or copying and pasting while thinking they are learning by doing so (or even recognizing they aren't but not worrying about it, since the AI can handle it, and not realizing how this will trap them on ever harder problems in the future)?
>...but rather a grim outlook that it's only going to get worse. The industry will continue to be flooded by software developers...
I somewhat agree, but even more grim, I think we are looking at this across many more fields than just software development. The way companies make use of this and the market forces at the corporate level might be different, but it is also impacting education and that alone should be enough to negatively impact other areas.
I think this is going to look a lot like the same problem in education, where the answer is that we will have to spend less time consuming written artifacts as a form of evaluation. I think effective code reviews will become more continuous and require much more checking in, asking for explanations as the starting point instead of "I read all of your code and give feedback." That just won't be sustainable given the rate at which text can now be output.
AI creates the same problem for hiring too: it generates the appearance of knowledge. The problem you and I have as evaluators of that knowledge is there is no other interface to knowledge than language. In a way this is like the oldest philosophy problem in existence. Socrates spent an inordinate amount of time railing against the sophists, people concerned with language and argument rather than truth. We have his same problem, only now on an industrial scale.
To your point about tests, I think the answer is to not focus on automated tests at first (though of course you should have those eventually), but instead we should ask people to actually run the code while they explain it to show it working. That's a much better test: show me how it works, and explain it to me.
Evaluating written artifacts is broken in education because the end goal of education is not the production of written artifacts - it is the production of knowledge in someone’s mind and the artifacts were only intended to see if that knowledge transfer had occurred. Now they no longer provide evidence of that. A ChatGPT written essay about the causes of the civil war is not of any value to a history professor, since he does not actually need to learn about the civil war.
But software development is about producing written artifacts. We actually need the result. We care a lot less about whether or not the developer has a particular understanding of the world. A cursor-written implementation of a login form is of use to a senior engineer because she actually wants a login form.
I think it's both actually, and you're hitting on something I was thinking of while writing that post. I'm reading "The Perfectionists," which is about the invention of precision engineering. It had what I would consider three aspects, all of which we should care about:
1. The invention of THE CONCEPT BEHIND THE MACHINE. In our context, this is "Programming as Theory Building." Our programs represent some conception of the world that is NOT identical to the source code, much the way early precision tools embodied philosophies like interchangeability.
2. The building of the machine itself, which has to function correctly. To your point, this is one of the major things we care about, but I don't agree it's the only thing. In the code world this IS the code, to your point. When this is all we think about, though, I think you get spaghetti code bases and poorly trained developers.
3. Training apprentices in both the ideas and the craft of producing machines.
You can argue we should only care about #2, many businesses certainly incentivize thinking in that direction, but I think all 3 are important. Part of what makes coding and talking about coding tricky is that written artifacts, even the same written artifacts, express all 3 of these things and so matters get very easily confused.
This is a key difference, but I think it plays less of a role than it initially appears because growing knowledge of employees helps building better artifacts faster (and fixing them when things go wrong). Short term, the login form is desired. But long term, someone with enough knowledge to support the login form, for when the AI doesn't quite get it all right, is desired.
> instead we should ask people to actually run the code while they explain it to show it working. That's a much better test: show me how it works, and explain it to me.
There’s a reason no one does it. Because it’s inefficient. Even in recorded video format. The helpful things are tests and descriptive PRs. The former because its structure is simple enough that you can judge it, and the test run can be part of the commit. The second is for the simple fact that if you can write clearly about your solution, I can then just do a diff of what you told me and what the code is doing, which is way faster than me trying to divine both from the code.
> asking for explanations as the starting point instead of "I read all of your code and give feedback." That just won't be sustainable given the rate at which text can now be output.
I claim that this approach is sustainable.
The idea behind the "I read all of your code and give feedback." methodology is that the writer really put a lot of deep effort into making sure that the code is of great quality - and then he is expecting feedback, which is often valuable. As long as you can, with some effort, find out by yourself how improvements could be made, don't bother asking for someone else's time.
The problem is thus that the writers of "vibe-generated code" hardly ever put such a deep effort into the code. Thus the code is simply not worth asking feedback for.
I think asking people to explain is good, but it's not scalable. I do this in interviews when I suspect someone is cheating, and it's very easy to see when they've produced something that they don't understand. But it takes a long time to run through the code, and if we had to do that for everything because we can't trust our engineers anymore that would actually decrease productivity, not increase it.
>This led to a kind of effort inversion, where senior devs spent much more time on these PRs than the junior authors themselves.
It's funny, I have the same problem, but with subject matter expertise. I work with internal PR people and they have clearly shifted their writing efforts to be AI-assisted or even AI-driven. Now I as the SME get these AI-written blog posts and press releases, and I spend far more time on getting all the hallucinations out of these texts.
It's an effort inversion, too - time spent correcting the PR-people's errors has tripled or quadrupled. They're supposed to assist me, not the other way around. I'm not the press release writer here.
And of course they don't 'learn' like your junior engineers - it's always AI, it's always different hallucinations.
P.S.: And yes, I've raised this internally with our leadership - at this rate we'll have 50% of the PR people next year; they're making themselves unemployed. I don't need a middleman whose job it is to copy-paste my email into ChatGPT and then send me the output; I can do that myself.
Part of the solution is pushing back when you spot tons of obvious lazy LLM errors instead of fixing them yourself. Otherwise there's not much incentive for them to improve their effort.
Yes, I've tried to establish an internal standard for AI usage: at the very least, the PR people have to tell us if they use AI. It completely changes how we approach editing an AI-written vs a human-written text (humans don't hallucinate citations, for a start).
Of course this is impossible to enforce, and I believe that the PR people would rather hide their AI usage. (As I wrote above why pay high salaries to people who automate themselves away?)
Edit: actually, that's the story of my life. I've been working for 20 years and every 5 years or so, stuff gets reshuffled so I have 3 more jobs instead of 1. It feels like I have 20 jobs by now, but still the same salary. And yes I've switched employers and even industries. I guess the key is to survive at the end of the funneling.
In the medium term I think you have to shift the work upstream to show that they've put in the labour to actually design the feature or the bug fix.
I think we've always had this mental model which needs to change that senior engineers and product managers scope and design features, IC developers (including juniors for simpler work) implement them, and then senior engineers participate in code review.
Right now I can't see the value in having a junior engineer on the team who is unable to think about how certain features should be designed. The junior engineer who previously spent his time spinning tires trying to understand the codebase and all the new technologies he has to get to grips with should instead spend that time trying to figure out how that feature fits into the big picture, consider edge cases, and then propose a design for the feature.
There are many junior engineers who I wouldn't trust with that kind of work, and honestly I don't think they are employable right now.
In the short term, I think you just need to communicate this additional duty of care (making sure that your pull requests are complete, because otherwise there's an asymmetry of workload) and judge those interns and juniors on how respectful of it they are.
I don't think the junior/senior distinction is useful in this case. All software engineers should care about the quality of the end product, regardless of experience. I've seen "senior" engineers doing the bare minimum, and "junior" engineers putting vastly more care into their work. Experience is something that is accrued over time, which gives you more insight into problems you might have seen before, but if there's no care about the product, then it's hardly relevant.
The issue with LLM tools is that they don't teach this. The focus is always on getting to the end result as quickly as possible, skipping any of the actually important parts of software development. The way problem solving is approached with LLMs is by feeding them back to the LLM, not by solving them yourself. This is another related issue: relying on an LLM doesn't give you software development experience. That is gained by actually solving problems yourself; understanding how the system works, finding the underlying root cause, fixing it in an elegant way that doesn't create regressions, writing robust tests to ensure it doesn't happen again, etc. This is the learning experience. LLMs can help with this, but they're often not used in this way.
I have a team that’s somewhat junior at a big company. We pretty much have everyone “vibe plan” significantly more than vibe code.
- you need to think through the product more, really be sure it’s as clarified as it can be. Everyone has their own process, but it looks like rubber ducking, critiquing, breaking work into phases, those into tasks, etc. (jobs to be done, business requirement docs, domain driven design planning, UX writing product lexicon docs, literally any and all artifacts)
- Prioritize setting up tooling and feedback loops (code quality tools of any and every kind are required). This includes custom rules to help enforce anything you decided during planning. Spend time on this and life will be a lot better for everyone.
- We typically make very, very detailed plans, and then the agents will “IVI” it (e.g. automatic linting, single test, test suite, manual evaluation).
You basically set up as many and as diverse of automatic feedback signals as you can.
—-
I will plan and document for 2-4 hours, then print a bunch of small “PRDs” that are like “1 story point” small. There are clear definitions of done.
Doing this, I can pretty much go the gym or have meetings or whatever for 1-2 hours hands off.
I think this is a good use of AI. Change your thinking - the code is, and has always been, a medium between the computer and the human. Where is the human? Where do we define our intent? AI gives us a chance to redefine that relationship or at least make it more fluid.
A well-architected system is easier to develop and easier to maintain. It makes sense to put all the human effort into producing that because, lo and behold, both humans and LLMs can produce much better results within a well-defined structure.
Everyone is responsible for what they deliver. No one is shipping gluttonous CLs, because no one would review them. You still have to know and defend your work.
Not sure what to tell you otherwise. The code is much more thought through, with more tests, and better docs. There’s even entire workflows for the CI portion and review.
I would look at workflows like this as augmentation rather than automation.
What this actually means is that your manager gets a raise when the AI written code works, and you get fired when it inevitably breaks horribly. You also get fired if you do not use AI written code
1. Mostly written by LLMs, and only superficially reviewed by humans.
2. Written 50-50% by devs and LLMs. Reviewed to the same degree as now.
Software of type 2 will be more expensive and probably of higher quality. Type 1 software will be much, much more common, as it will be cheaper. Quality will be lower, but the open question is whether it will be good enough for the use cases of cheap, mass-produced software. This is the question that is still unanswered by practical experience, and it's the question that all the venture capitalists are salivating over.
I 100% guarantee you there will be plenty of software still written fully by humans—and even more that's written 95% by humans, with minor LLM-based code autocomplete or boilerplate generation.
“We typically make very, very detailed plans” - this is writing code in English without tests. Admittedly, since generating code is faster, you get faster feedback. Still, I do not think it is as efficient as an incremental, test-driven approach. Here you can optimize early on for the feedback loop.
You get faster feedback in code, but you won't know if it actually does what it's supposed to do until it's in production. I don't believe (but have no numbers) LLMs speed up the feedback loop.
You give up, approve the trash PRs, wait for it to blow up in production and let the company reap the rewards of their AI-augmented workforce, all while quietly looking for a different job or career altogether.
I wanted to add to your points that I think there's a lack of understanding of architecture, which the previous generation learned through refactoring and unit tests.
If LLMs are also the ones writing the unit tests, this will get worse, because there will be no time spent reflecting on "what do I need" or "how can this be simplified". These are, in my opinion, the questions that characterize the differences between a Developer, Engineer, and Architect mindset. And LLMs / vibe coding will never develop actual engineers or architects, because they can never develop that mindset.
The easiest programming language to spot those architectural mistakes in is coincidentally the one with the least syntax burden. In Go it's pretty easy to discover these types of issues in reviews because you can check the integrated unit tests, which help a lot in narrowing down the complexities of code branches (and whether or not a branch was reached, for example).
In my opinion we need better testing/review methodologies. Fuzz testing, unit testing and integration testing isn't enough.
We need some kind of logical inference tests which can prove that code branches are kept and called, and which allow us to confirm satisfiability.
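To gesture at what part of that could look like today, here is a minimal sketch using coverage.py's branch measurement; calculator and its run_self_tests() entry point are hypothetical stand-ins for the module and tests under review:

    import coverage

    # Measure branch coverage (not just line coverage) while the tests run.
    cov = coverage.Coverage(branch=True)
    cov.start()
    import calculator              # hypothetical module under test
    calculator.run_self_tests()    # hypothetical test entry point
    cov.stop()

    # Fail loudly if some branch of the module was never taken.
    percent = cov.report(include=["*/calculator.py"], show_missing=True)
    if percent < 100.0:
        raise SystemExit(f"branch coverage is only {percent:.1f}%; some branches were never exercised")

In CI you would more likely run `coverage run --branch -m pytest` followed by `coverage report --fail-under=100`, but the point is the same: a green test suite alone doesn't prove the branches were ever reached, let alone satisfiable.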
I guess answering "you obviously didn't write it, please redo" is not an option, because then you are the dinosaur hindering company's march towards the AI future?
You might make this easier by saying you just checked their code with your own AI system and then say it returned "you obviously didn't write it, please redo".
Honestly, I don't think it matters who wrote it; ultimately it's about the code and the product, not the individual author.
That said, a lazy contribution - substandard code or carelessly LLM-generated - just wastes your time if your feedback is simply fed back into the LLM. Setting boundaries then is perfectly acceptable, but this isn't unique to LLMs.
Struggling with the same issues with junior developers. I've been asking for an implementation plan and iterating on it. The typical workflow is to commit the implementation plan and review it as part of a PR. It takes 2-3 iterations to get right. Then the developer asks Claude Code to implement the plan based on the markdown. I've seen good results with this.
Another thing I do is ask for the Claude session log file. The inputs and thoughts they provided to Claude give me a lot more insight than the output of Claude. Quite often I am able to correct the thought process when I know how they are thinking. I've found junior developers treat Claude like SMS - small, ambiguous messages with very little context, hoping it will perform magic. By reviewing the Claude session file, I try to fix this superficial prompting behaviour.
And third, I've realized Claude works best if the code itself is structured well and has tests, tools to debug, and documentation. So I spend more time on tooling so that Claude can use these tools to investigate issues, write tests and iterate faster.
Still a long way to go, but this seems promising right now.
I see this a lot and have even done so myself. I think a lot of people in the industry are a bit too socially aware and think that if they start a discussion they look like they're trying too hard.
It's stupid yes, but plenty of times I've started discussions only to be brushed off or not even replied to, and I believed it was because my responses were too long and nobody actually cared.
I feel the same way; we use Gitlab in our day to day, and often I find myself writing a long reply after fixing a code review issue, describing what I changed, resources used, etc... then hitting the "resolve" button, which collapses the comment and unless the reviewer has enabled notifications and actually reads them, I doubt they would ever see my well thought-out response.
But then, for me, writing is a way to organize thought as well, plus these remarks will stay in the thread for future reference. In theory anyway, in practice it's likely they'll switch from Gitlab to something else and all comments will be lost forever.
Which makes me wish for systems that archive review remarks into Git somehow. I'm sure they exist, but they're not commonly used.
That's how it often works with offshore code. You get a huge pile of code that meets the spec, so not totally wrong, but with a lot of small issues that are hard to identify. And you as the senior dev are now in a bad situation: since project management has marked the task as "Done" already, you are the bad guy if you reject the code and ask for rework. At some point you are worn down by all the pressure and let the code through and you end up with a growing pile of questionable code that sort of works but requires a ton of maintenance and is hard to change. You can't win.
My only hope is that AI one day will be much better than humans in every aspect and produce super high quality code. I don't see why this wouldn't happen. The current tools are still primitive.
Sounds like several fundamental workflow issues that the LLM is perhaps exacerbating but need to be fixed either way.
One, they need to run their code. Make sure it works before submitting a PR. If someone submits code to me that does not work I don't care if it came from an LLM or not, go run your code and come back when it works. If they routinely refuse to run their code and never learn their lesson then I might suggest they find another profession... Or require they submit a video of the code working.
Second, going away and coming back with a totally different PR I give the feedback of "what happened to the code we were working on before? We didn't need all new code." As the senior my time is worth (a bit) more than the intern's so I don't hesitate to make their bad choices their problem. Come back when you've made a serious attempt and then we can discuss it.
Code review has become the new bottleneck, since it’s the layer that prevents sloppy AI-generated code from entering the codebase.
One thing I do that helps clean things up before I send a PR is writing a summary. You might consider encouraging your peers to do the same.
## What Changed?
Functional Changes:
- New service for importing data
- New async job for dealing with z.
Non-functional Changes:
- Refactoring of Class X
- Removal of outdated code
It might not seem like much, but writing this summary forces you to read through all the changes and reflect. You often catch outdated comments, dead functions left after extractions, or other things that can be improved—before asking a colleague to review it.
It also makes the reviewer’s life easier, because even before they look at the code, they already know what to expect.
Ha. Almost always when I see PRs with such summaries I can assume that both the summary and the code have been AI-generated.
PRs in general shouldn't require elaborate summaries. That's what commit messages are for. If the PR includes many commits where a summary might help, then that might be a sign that there should be multiple PRs.
Granted, it is not only summaries that go into the description—how to test, if there is any pre-deploy or post-deploy setup, any concerns, external documentation, etc.
Less is more. A summary serves to clarify, not to endlessly add useless information.
⸻
2. About the usefulness of summaries.
Summaries always provide better information—straight to the point—than commits (which are historical records). This applies to any type of information.
When you’re reporting a problem by going through historical facts, it can lead to multiple narratives, added complexity, and convoluted information.
Summaries that quickly deliver the key points clearly and focus only on what’s important offer a better way to communicate.
If the listener asks for details, they already have a clear idea of what to expect. A good summary is a good introduction to what you are going to see in the commits messages and in the code changes.
______________________
3. About multiple PRs.
A summary helps to clarify what is scope creep (be it a refactor or code unrelated to the ticket); it makes it easier for the reviewer to demand a split into multiple PRs.
examples:
A non-summary PR/MR might lead to the question—“WHY is this code here?"
"he touched a class here, was he fixing something that the test missed out ? or is just a refactor?"
_______________
As a reviewer you can get that information yourself, although a summary helps you get it much quicker.
> A non-summary PR/MR might lead to the question—“WHY is this code here?"
This is precisely what a (good) commit message should answer.
Commits are historical records, sure, but they can include metadata about the change, which should primarily explain why the change was made, what tradeoffs were made and why, and any other pertinent information.
This is useful not just during the code review process, but for posterity whenever someone needs to understand the codebase, while bisecting, etc. If this information is only in the PR, it won't be easy to reference later.
FWIW I'm not against short summaries in PRs that are exceptionally tricky to understand. The PR description is also useful as a living document for keeping track of pending tasks. But in the majority of cases, commit messages should suffice. This is why GitHub, and I'm sure other forges as well, automatically fill out the PR title and description with the commit information, as long as there's only one commit, which is the ideal scenario. For larger PRs, if it doesn't make sense to create multiple, I usually just say "See the commits for details".
Depends on the business logic; sometimes summaries (or a short demo explanation) help a lot to understand the tradeoffs that were made, so the reviewer can contribute more without spending too much time. It is especially helpful if the part is somewhat isolated.
In theory this makes sense, but in practice now that LLMs are writing the PR summaries we just have even more slop to wade through to figure out exactly what the change is trying to achieve. I think the slide in this direction already started with exhaustive PR templates that required developers to fill in a whole bunch of fluff just to open their PR. The templates didn't make bad developers good, it just caused them to produce more bad content for review.
My experience with LLM-generated summaries is the same as it was with the templates: many complete them in a way that is entirely self-referential and lacking in context. I don't need a comment or a summary to describe to me exactly the same thing I could have already understood by reading the code. The reason for adding English-language annotations to source code is to explain how a particular change solves a complex business problem, or how it fits into a long-term architectural plan, that sort of thing. But the kinds of people who already did not care about that high level stuff don't have the context to write useful summaries, and LLMs don't either.
The worst thing I've seen recently is when you push for more clarity and context on the reasons behind a change, and then that request gets piped into an LLM. The AI subsequently invents a business problem or architectural goal that in reality doesn't exist and then you get a summary that looks plausible in the abstract, and may even support the code changes it is describing, but it still doesn't link back to anything the team or company is actually trying to achieve, and that costs the reviewer even more time to check. AI proponents might say "well they should have fed the team OKRs and company mission/vision/values into the LLM for context" but then that defeats the point of having the code review in the first place. If the output is performative and not instructive, then the whole process is a waste of time.
I am not sure what the solution is, although I do think that this is not a problem that started with LLMs, it's just an evolution of a challenge we have always faced - how to deal with colleagues who are not really engaged.
Simply require that every pull request from the junior developers satisfies a very high standard. If they are not sure about something, they may ask, but if they send you a pull request of bad quality to review, and you find something, they deserve a (small) tantrum.
It is likely not possible to completely forbid junior developers from using AI tools, but any pull request that they create that contains (AI-generated) code that they don't fully comprehend (they can google) will be rejected (to test this, simply ask them some non-trivial questions about the code). If they do submit such a PR, again, these junior developers deserve a (small) tantrum.
The thing is that a "very high standard" is not a measurable criterion. The project can have test coverage requirements and strict linting to catch basic syntax and logic problems, but how do you enforce simplicity, correctness, robustness, or ergonomics? These are abstract concepts that are difficult to determine, even for experienced developers, so I wouldn't expect less experienced developers to consider them. A code review process is still important, with or without LLMs.
So we can ask everyone using these tools to understand the code before submitting a PR, but that's the best we can do. There's no need to call anyone out for not meeting some invisible standard of quality.
I work alone, not in teams, but I use LLMs (codex-1) a lot, and it's extremely helpful. I accepted that in return the code base is much lower quality than if I had written it myself.
What works for me is that after having lots of passing tests, I start refactoring the tests to get closer to property testing: basically prove that the code works by letting it go through complex scenarios and checking that the state is good at every step, instead of just testing lots of independent cases. The better the test is, the harder it is for the LLM to cheat.
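As an illustration of that style, here is a minimal stateful property-test sketch using the hypothesis library; TodoStore is a stand-in for whatever the LLM actually produced:

    from hypothesis import strategies as st
    from hypothesis.stateful import RuleBasedStateMachine, invariant, rule

    class TodoStore:
        # Stand-in for the real system under test.
        def __init__(self):
            self._items = set()

        def add(self, item):
            self._items.add(item)

        def remove(self, item):
            self._items.discard(item)

        def items(self):
            return list(self._items)

    class TodoStoreMachine(RuleBasedStateMachine):
        def __init__(self):
            super().__init__()
            self.store = TodoStore()
            self.model = set()   # trivially correct reference model

        @rule(item=st.text(min_size=1))
        def add_item(self, item):
            self.store.add(item)
            self.model.add(item)

        @rule(item=st.text(min_size=1))
        def remove_item(self, item):
            self.store.remove(item)
            self.model.discard(item)

        @invariant()
        def agrees_with_model(self):
            # Checked after every single step of every generated scenario.
            assert set(self.store.items()) == self.model

    # Collected and run by pytest like any other test case.
    TestTodoStore = TodoStoreMachine.TestCase

Because the invariant has to hold after every operation in every generated sequence, code that merely looks right tends to fail quickly, which is much harder to fake than a handful of hand-picked example tests.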
I wonder how this trade-off will age.
I'm not a Mag7/Saas/SV startup tech guy, so I've tended to work on systems that are in service & maintained for upwards of 10 years. It's not unusual to see 20 year old codebases in my field.
We scoff at clever code that's hard to understand because it leaves teams poorly placed to maintain it, but what about knowingly much lower quality code?
When the price of building becomes low, you just toss it and build more.
Much like Ikea's low cost replaceable furniture has replaced artisan, hand made furniture and cheap plastic toys have replaced finely made artifacts. LLM produced code is cheap and low effort; meant to be discarded.
Recognizing this, it should be used where you have that trade-off in mind. You might still buy a finely made sofa because it's high touch. But maybe the bookshelves from Ikea are fine.
> - Instead of fixing the things in the original PR, I'd often get a completely different approach as the response to my first review. Again, often broken in new and subtle ways.
This kind of thing drove me mad even before LLMs or coding - it started at school when I helped people with homework. People would insist on switching to an entirely different approach midway through explaining how to fix the first one.
I've found "write it, then ask the chatbot for a code review" to be a good pattern. You have to be judicious about what you accept, but it's often good at tidying things up or catching corner cases I didn't consider. Reading your comment, it occurs to me that a junior could get into a lot of trouble with this pattern.
I don't give my interns green field projects; they are usually hack jobs like "get A working with B", which means they can't really rely on LLMs to do much of the coding, and must instead try, run the test, adjust, try again. More like junior investigators who happen to write some code, I guess. I imagine this is extremely group-specific though.
For junior devs, it’s about the same, I’m assigning hack jobs, because most of what we need to do are hack jobs. The code really isn’t the bottleneck in that case, the research needed to write the code is.
The original approach was to be a surgeon and minimally cut the code to save the patient (the PR). You need to change your thinking and realize that the architecture of the prompt was wrong. Talk in abstractions and let them fully revise the PR, like "this should be refactored to reraise errors to the calling function" instead of pinpointing single lines.
In other words, we need to code review the same way we interact with LLMs - point to the overarching flaw and request a reroll.
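For what it's worth, here is a small Python sketch of the kind of change that abstract comment is asking for; ConfigError is a made-up exception type for the example:

    import json

    class ConfigError(Exception):
        pass

    # Before: the helper swallows the failure, so the caller can't tell a
    # missing file from an intentionally empty configuration.
    def load_config_v1(path):
        try:
            with open(path) as f:
                return json.load(f)
        except OSError:
            return None

    # After: re-raise with context and let the calling function decide what to do.
    def load_config_v2(path):
        try:
            with open(path) as f:
                return json.load(f)
        except OSError as exc:
            raise ConfigError(f"could not read config at {path}") from exc

The review comment stays at the level of intent ("reraise to the caller"), and the author, or their LLM, is free to choose the exact shape of the fix.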
> the code would look like it does the right thing but actually not work at all, or just be much more broken than code with that level of "polish" would normally be
I don't understand: if they don't test the code they write (even manually), it's not an LLM issue, it's a process one.
They have not been taught what it means to have a PR ready for review; LLMs are irrelevant here.
You think about the implementation and how it can fail. If you don’t think about the implementation, or don’t understand the implementation, I would argue that you can earnestly try to test, but you won’t do a good job of it.
The issue of LLMs here is the proliferation of people not understanding the code they produce.
Having agents or LLMs review and understand and test code may be the future, but right now they’re quite bad at it, and that means that the parent comment is spot on; what I see right now is people producing AI content and pushing the burden of verification and understanding to other people.
> The issue of LLMs here is the proliferation of people not understanding the code they produce.
Let's ignore the code quality or code understanding: these juniors are opening PRs, according to the previous user, that simply do not meet the acceptance criteria for some desired behavior of the system.
This is a process, not tools issue.
I too have AI-native juniors (they learned to code alongside Copilot or Cursor or ChatGPT) and they would never ever dare to open a PR that doesn't work or doesn't meet the requirements. They may miss some edge case? Sure, so do I. That's acceptable.
If OP's juniors are, they have not been taught that they should only ask for feedback once their version of the system does what it needs to do.
> pushing the burden of verification and understanding to other people.
Where was the burden prior to LLMs?
If a junior cannot prove their code works and show an understanding of it, how was this "solved" before LLMs? Why can't the same methods work post-LLM? Is it due to volume? If a junior produces _more_ code they don't understand, it doesn't give them the right to just skip PR review, testing, etc.
If they do, where's upper management's role here then? The senior should be bringing up this problem and work out a better process and get management buy-in.
>> If you don’t think about the implementation, or don’t understand the implementation, I would argue that you can earnestly try to test, but you won’t do a good job of it.
Previously the producers of the code were competent to test it independently.
This increasingly, to my personal observation, appears to no longer be the case.
They do test it, they just don't think about it deeply, and so they do a shit job of testing it, and an incompetent job of writing tests for it.
Not by being lazy; smart diligent folk doing a bad job because they didn't actually understand what needed to be tested, and tested some irrelevant trivial happy path based on the requirements not the implementation.
That's what LLMs give you.
It's not a process issue; it's people earnestly thinking they've done a good job when they haven't.
Testing is often very subtle. If you don't understand the changes you made (or really didn't make, because the LLM made them for you), you don't know how they can subtly break other functionality that also depends on them. Even before LLMs, this was a problem for juniors: they would change some code, it would build, it would work for their feature, but it would break something else that was seemingly unrelated. Only if you understand what your code changes actually "touch" do you know what to (manually or automatically) test.
This is of course especially significant in codebases that do not have strict typing (or any typing at all).
Writing a test requires you to actually know what you're trying to build, and understanding that often requires the slow cooking of a problem that an LLM robs from you. I think this is less of a problem when you've already been thinking deeply about the domain / codebase for a long time. Not true for interns and new hires.
I agree, normally the process (especially of manual testing) is a cultural thing and something you instill into new devs when you get broken PRs - "please run the tests before submitting for review", or "please run the script in staging, here's the error I got: ...".
Catching this is my job, but it becomes harder if the PR actually has passing tests and just "looks" good. I'm sure we'll develop the culture around LLMs to make sure to teach new developers how to think, but since I learned coding in a pre-LLM world, perhaps I take a lot of things for granted. I always want to understand what my code does, for example - that never seemed optional before - but now not understanding your code gets you much further than copy-pasting stuff from Stack Overflow ever did.
PITA or senior developer that's too senior for that company? Honestly I think an organization has no say in discussions about testing or descriptive PRs, and on the other side, a decent developer does not defer to someone higher-up to decide on the quality of their work.
Some managers will do anything for velocity, even if the direction is towards a cliff with sharp rocks below. You try to produce quality work while others are doing tornado programming all over the codebase and being praised for it.
The industry does not care what engineers think because engineers have no power at all. We will not be able to hold any line against a management class that has been assured that AI tooling doubles engineer productivity and have zero ability to judge that claim, because they've never built anything and run into the exact problems that AI tools cause; A lack of understanding. They don't even know what they don't know.
If you wanted software engineers to be able to hold any sort of quality line against a few trillion dollars worth of AI investment, we needed to unionize or even form a guild twenty years ago.
> - During review, they hadn't thought as deeply about their code so my comments seemed to often go over their heads. Instead of a discussion I'd get something like "good catch, I'll fix that" (also reminiscent of an LLM).
Would you mind drilling down into this a bit more? I might be dealing with a similar problem and would appreciate if you have any insight
The "good catch" thing is something I do, too, but mostly for short review comments like "this will blow up if x is null" etc.
I had to think a bit about it, but when it feels off it can be something like:
- I wrote several paragraphs explaining my reasoning, expecting some follow-up questions.
- The "fix" didn't really address my concerns, making it seem like they just said "okay" without really trying to understand. (The times when the whole PR is replaced makes it seem like my review was also just forwarded to the LLM, haha)
- I'm also comparing to how I often (especially earlier in my career) thought a lot about how to solve things, and when I got constructive feedback it felt pretty rewarding - and I could often give my own reasoning for why I did things a certain way. Sometimes I had tried a bunch of the things that the reviewer suggested, leading to a more lively back-and-forth. This could just be me, of course, or a cultural thing, but my expectation also comes from how other developers I've worked with react to my reviews.
Does that make sense? I'd be interested in hearing more about the problem you're dealing with. If this is not the right place, feel free to send an email :)
I had this experience even before LLMs, in particular when working with developers who came up in a non-western educational environment. There was a mindset that the only thing that matters is making the boss happy, and in a code review context the reviewer plays the role of boss, so the mindset is "do whatever is required for the boss to stop complaining", not "how can I learn from the knowledge this person is sharing". It's a fundamental difference in how people relate to one another professionally, and I think LLMs have spread this kind of attitude into broader cultural contexts - the devaluation of critical thinking and learning as a necessary part of the job and a more mercenary focus on uncritically churning out whatever the boss asked for.
The doomer perspective would be that people are getting dumber and more complacent and that this will unravel society, but that might not actually be the case if we consider that the mindset already existed in other societies that still thrive. Perhaps the people who never really gave a crap about the quality of their work were right all along? After all, despite the fact most of us are in the top 20% of earners in our countries and easily the top 10% or an even more elite minority globally, end of the day we are still "code peasants" who build whatever our boss told us to build so that an ultra-wealthy investor class can compound their wealth. Why should we waste our time caring about that? Why not get an AI to grind out garbage on our behalf? Why not focus our energies on more rewarding pursuits in our personal lives?
Of course I am playing devil's advocate here, because for me personally being forced to show up for work every day thanks to capitalism and then doing a half-assed job makes me more depressed than trying to excel at something I never wanted to do in the first place. But there is a part of me that understands the mindset and wonders if my life might be easier if I shared it.
Anyway, prior to LLMs I dealt with this phenomenon by reluctantly accepting that most people don't care anywhere near as much about the quality of their work as I do, and that it was hopeless trying to change them. Find the few who do care and prioritize actually-productive knowledge exchanges with them. Drop your standards for people who clearly don't care. If the code doesn't meet your standards but it's still more-or-less functional, just let it go. You might imagine it'll reflect poorly on you, except in reality management doesn't care anyway - the push to AI all the things right now is the "mask off" moment. Every now and then you'll still find a motivated junior who really is passionate about getting better and then being a part of their growth is still rewarding.
Basically the juniors just ask the LLM for an explanation of what the problem is and then fix what the LLM interprets your review to be talking about.
The way that you solve this is that you pull your junior into a call and work them through your comments one by one verbally, expecting them to comprehend the issues every time.
This is exactly my experience. Plus documentation is no longer being read because the LLM already generated the code, so the juniors don’t even know what to check before handing in their PR
Somewhat interesting how this is similar to other uses of ML-driven tools, like in electronics engineering, where the resulting solutions can be near impossible for experienced engineers to understand.
Have them first write a "code spec" in the repo with all the interfaces defined and comments that describe the behaviors.
"""
This is the new adder feature. Internally it uses chained Adders to multiply:
Adder(Adder(Adder(x, y), y), ...)
"""
class Adder:
# public attributes x and y
def __init__(self, x: float, y: float) -> None:
raise NotImplementedError()
def add(self) -> float:
raise NotImplementedError()
class Muliplier:
# public attributes x and y
# should perform multiplication with repeated adders
def __init__(self, x: float, y: float) -> None:
raise NotImplementedError()
def multiply(self) -> float:
raise NotImplementedError()
This is a really dumb example (frankly something Claude would write), but it illustrates that they should do this for external interfaces and implementation details.
For changes, you'd do the same thing. Specify it as comments and "high level" code ("# remove this class and switch to Multiplier") etc.
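And, purely for illustration, one way the spec above might end up being filled in (a sketch; it assumes y is a non-negative whole number so the toy repeated-addition idea works):

    class Adder:
        # public attributes x and y
        def __init__(self, x: float, y: float) -> None:
            self.x = x
            self.y = y

        def add(self) -> float:
            return self.x + self.y

    class Multiplier:
        # Multiplies x by y using repeated Adders, as the spec comments describe.
        def __init__(self, x: float, y: float) -> None:
            self.x = x
            self.y = y

        def multiply(self) -> float:
            total = 0.0
            for _ in range(int(self.y)):   # toy assumption: y is a whole number
                total = Adder(total, self.x).add()
            return total

The spec stays the contract; the implementation is whatever the author (or their LLM) produces against it, and the review can focus on whether the contract itself was right.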
A mix, but a majority Ruby, with some shell scripts and Terraform.
My gut feeling is that it would generalize to typed languages, Go, Erlang, even Haskell etc, but maybe some of them make life easier for the reviewer in some ways? What are your thoughts on that?
I've worked with some junior developers and my experience has been the same as yours. We work primarily with typed languages. With junior developers I see it go one of two ways: either they write code that does not take advantage of the type system, typing everything as basic string or numeric types, or they go to the other extreme and build a complicated type hierarchy with abstractions more complicated than the problem space requires.
> Instead of fixing the things in the original PR, I'd often get a completely different approach as the response to my first review. Again, often broken in new and subtle ways.
I didn't expect this initially but I am seeing it a ton at work now and it is infuriating. Some big change lands in my lap to review and it has a bunch of issues but they can ultimately be worked out. Then kaboom it is an entirely different change that I need to review from scratch. Usually the second review is just focused on the edits that fixed the comments from my first review. But now we have to start all over.
The human PR/code review needs to be abandoned. I'm not sure how or what will replace it. Some kind of programmatic agent review/test loop, contractual code that meets SLAs, vertical slice architecture, microservices (shudder)...
I see these posts on HN from time to time. Do you really think your code was any better when you were an intern? Mine was god awful. They probably binned all my work after each internship! I don't think an LLM makes them any worse.
Here is the crazy part: As a nearly neckbeard, there were no code reviews or PRs in my era. And mostly zero unit tests.
Thanks for these insights. I am curious and want to know: is it also a 'good catch, I'll fix that' when you pair program or mob? Or better, did you notice any differences in behavior and issues while pair or mob programming with juniors (instead of using pull requests)?