It's very problematic for companies, mainly because of the tooling. Large companies are equating Lovable, Replit, Bolt, v0 with Claude Code, Codex, etc., all under the "Vibe Coding" banner.
I try to fit the former under the banner of "Prompt-to-App Tools" and the latter under "Autonomous AI Engineering".
The ascendancy of non-descriptive jargon for everything is irritating as hell. If something is supposed to mean "AI-generated code", then it needs to contain at least the most important word from that description. Sad that this has to be explained now.
Agreed. If your identity is your ability to bang away on a keyboard writing instructions to a computer in Python (or any other "language"), you're in for a bad time.
If your identity is solving difficult, domain-specific software-based problems, efficiently and securely, it doesn't matter if your instructions are written in English, French or... Python.
One point where, I think, the analogy fails is context.
If one wants to modify a code base, it's necessary to be able to, sort of, load the program into one's head and then work off a mental model. The "slowness" of traditional development and the tooling around it gave people enough time to do this and, over time, to get really good at navigating and changing a code base.
With LLMs able to generate huge amounts of code in a short time, that step goes missing. The LLM doesn't fully know what it generated or its nuances, and the developer doesn't have time to absorb it all, so at the end of the day you have something running that nobody (including the original AI author) really understands. That's risky.
Of course, there are ways to mitigate and handle this; I just don't know if the original analogy accounts for it.
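One mitigation I've seen suggested (my own example, not from the thread): cap how much generated code can land in a single change, so a reviewer can still build a mental model of it. A rough sketch of a pre-merge check, assuming an `origin/main` base branch and a threshold you'd tune yourself:

```python
import subprocess
import sys

MAX_CHANGED_LINES = 400  # arbitrary threshold; tune to what a reviewer can actually absorb

def changed_lines(base: str = "origin/main") -> int:
    """Count added + removed lines in the working tree vs. the base branch."""
    out = subprocess.run(
        ["git", "diff", "--numstat", base],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        added, removed, _path = line.split("\t", 2)
        if added != "-":  # binary files report "-" in numstat; skip them
            total += int(added) + int(removed)
    return total

if __name__ == "__main__":
    n = changed_lines()
    if n > MAX_CHANGED_LINES:
        sys.exit(f"Diff too large to review meaningfully ({n} lines changed); split it up.")
```

Nothing clever there, but run in CI it forces generated code to arrive in chunks small enough that a human can still load them into their head.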
Is that a "yes" on lint rules? AI needs deterministic checks to block commits, because once the slop hits code review it's already a gigantic waste of time. AI needs self-correcting loops.
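A minimal sketch of such a self-correcting loop, assuming a hypothetical `generate_patch` model call (a placeholder for whatever agent you run) and using ruff as the deterministic gate:

```python
import subprocess

MAX_ATTEMPTS = 3  # give the model a few tries before a human steps in

def generate_patch(task: str, path: str, feedback: str) -> None:
    """Placeholder for the actual model call that writes/edits `path`."""
    raise NotImplementedError("wire this up to your coding agent")

def lint(path: str) -> str:
    """Deterministic gate: return the linter's findings, empty string if clean."""
    result = subprocess.run(
        ["ruff", "check", path],  # any deterministic linter works here
        capture_output=True, text=True,
    )
    return "" if result.returncode == 0 else result.stdout

def self_correcting_loop(task: str, path: str) -> bool:
    """Regenerate until the gate passes, or give up and block the commit."""
    feedback = ""
    for _ in range(MAX_ATTEMPTS):
        generate_patch(task, path, feedback)  # lint errors fed back as context
        feedback = lint(path)
        if not feedback:
            return True   # gate passed; safe to commit
    return False          # still failing; block the commit and escalate to a human
```

The point is that the feedback happens before code review, mechanically, instead of burning a reviewer's time on slop a linter could have caught.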