AI has accelerated two groups of people: beginners/naive users and experts.
This article only talks about beginners digging a hole for themselves.
Doesn't mention the speedup that experts get.
In my past 12 years as a corporate trainer, I've worked with lots of companies, teaching people how to code, how to collaborate, and what makes code good. I've also used AI a lot and can use it to quickly write code better than 95% of software engineers. (Sample size of one, disclaimer.)
Did you read the article? Ctrl+F "experts". It's right there in paragraph 6, complete with a citation from an NBER study (directly contradicting your anecdotal evidence). You don't agree with the author's position, that's fine. But this is a weird lie.
I think this goes to a broader point: developers aren't necessarily hired to write code.
They're hired to be responsible for some part of the product.
Introducing AI doesn't remove that responsibility.
Folks tend to focus on the code and the tools they're using (maybe I'm cynical from years in the industry). I don't think your boss wants to do your job, even if they could use AI to do it. I think your boss wants to have a headcount, and they want that headcount to be responsible for the product.
Would add my biggest tip to that: TDD. Most people omit it.
There is a difference between:
- write code, write tests
And
- write tests, write code
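A minimal sketch of what "write tests, write code" looks like in practice, using a hypothetical `slugify` function as the example (the function name and its rules are illustrative, not from the thread). The test class is written first and pins down the behavior; the implementation is only written afterward to satisfy it:

```python
import re
import unittest

# Step 1: write the tests first, against a function that doesn't exist yet.
# They fail immediately, which is the point: they define "done" up front.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_punctuation(self):
        self.assertEqual(slugify("What's up?"), "whats-up")

# Step 2: only now write the implementation, shaped by the failing tests.
def slugify(text: str) -> str:
    text = text.lower()
    text = re.sub(r"[^a-z0-9\s-]", "", text)   # drop punctuation
    return re.sub(r"\s+", "-", text).strip("-")  # spaces -> hyphens
```

In the "write code, write tests" order, the tests tend to encode whatever the code happens to do; in this order, the code has to meet a spec that existed before it did, which is also a useful contract to hand to an agentic coding tool.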
Had another agentic (vibe) coding experience that confirmed this for me: creating an SDK for a $500 light so I can control it from my Steam Deck instead of my phone (no SDK existed before yesterday). For anyone interested, I'm teaching my vibe coding (I meant agentic) tutorial at PyCon next week. The 3-hour version should be posted to YouTube soon thereafter.
Not in the gaming scene at all, but my thoughts were, "Why isn't black the first product developed?" Isn't that a standard, and probably has more consumer interest?
And I don't understand why they can't just mold all the colors in parallel. If brown is in production, why can't they experiment with black at the same time?
Business logic is usually the most substantial part of legacy systems in my experience, so I imagine so.
Not to be too negative but a lot of modern software complexity is a prison of our own making, that we had time to build because our programs are actually pretty boring CRUD apps with little complex business logic.
I can only assume there's a ton of domain knowledge accrued over those years and beyond baked into the legacy code, that an LLM can just scoop up in a minute.
How much time did it take to verify and validate that large corpus of generated code? Not including the back and forth to get rid of hallucinations and other mistakes.
How does the code look? I am curious if there is proper usage of abstractions, or is logic just kind of all over the place?
Some part of me feels like LLM-generated code is great if one only cares about the solution, but leaves a lot to be desired if one actually cares about code quality. Then again, maybe I am just bad at using LLMs -- I prefer the chat over letting LLMs do the work for me.
This isn't one article, it's a "fragments" post with five separate small thoughts. They happen to all be about LLMs so I can see why it would read as a single article, but it's not.