To paraphrase a wise bot: like everything else in life, coding is just a primitive, degenerate form of bending.
Humans relate new concepts to established concepts and world views; it's a form of prejudice, and it has philosophical resonance with premature optimization being the root of some not-great thinking.
To me, ending at 0:55 is a million times more logical than starting at 0:05… and if there were an issue, then roll back to 0:50, or 0:45, or whatever it takes for honest accounting and/or meeting discipline to emerge.
Decades ago, at an engineering firm I worked for, it was baked into the groupware settings.
The smelly basement nerd running IT seemed normal back then, but here we are in 2026… Turns out he was an unsung smelly genius ahead of his time. A giant among men who bathe.
Given a hypothetical 25% boost: there are categories of errors that vibe-testing vibed code will bring in, and we know humans suck at critical reading. On the support timeline of an Enterprise product, that's gonna lead to one or more real issues.
At what point is an 'extra' 25% coding overhead worth it to ensure that a sane human, reasonably concerned about criminal consequences for impropriety, has read all the code while making it, and every change around it? To prevent public embarrassment that can and will chase off customers? To have someone to fire and sue if need be?
[Anecdotally, the inflection point was finding tests updated to short-circuit through mildly obfuscated code (introduced after several reviews). Paired with a working system developed with TDD, that mistake only becomes obvious when the system stops working but the tests don't. I wrote it, I ran the agents, I read it, I approved it, but I was looking for code quality, not intentional sabotage/trickery… lesson learned.]
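For anyone wondering what a "short-circuited" test looks like in practice, here's a minimal sketch (hypothetical names, plain Python/unittest, not the actual code from that incident): an innocuous-looking guard makes the test return before its assertions ever run, so the suite stays green even when the code under test regresses.

    # Hypothetical sketch of a short-circuited test (not the code from the
    # incident above). The guard looks like an environment check, but
    # RUN_FULL_SUITE is never set, so the assertions never execute and the
    # test "passes" even when apply_discount() is broken.
    import os
    import unittest

    def apply_discount(price: float, percent: float) -> float:
        """Toy system under test; imagine this regresses after a change."""
        return price * (1 - percent / 100)

    class DiscountTests(unittest.TestCase):
        def test_ten_percent_discount(self):
            if not os.environ.get("RUN_FULL_SUITE"):
                return  # silently skips every assertion below
            self.assertAlmostEqual(apply_discount(100.0, 10.0), 90.0)

    if __name__ == "__main__":
        unittest.main()

Run normally, the suite reports a pass; the trick only becomes obvious when the system visibly misbehaves while the tests stay green, exactly as described above.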
From a team-lead perspective in an Enterprise space, using 25% more time on coding to avoid insane amounts of aggressive, easy-to-flub review and whole categories of errors sounds like a smart play. CYA up front, take the pain up front.
Not that you are wrong, but you don't seem to understand my point. I spend less than 25% of my time writing code. I also do code review, various story/architecture planning, testing, bug triage, required training, and other management/people activities; these take up more than 75% of my time. Even if AI could vibe-code as well as me, infinitely fast, it still wouldn't be a 75% improvement.
Nerd circles are in no way immune to fashion, and often contain a strong orthodoxy (IMO driven by cognitive dissonance caused by the humbling complexity of the world).
Cargo cults, where people reflexively shout slogans and truisms, even when misapplied. Lots of people who've heard a pithy framing are waiting for any excuse to hammer it into a conversation for self-glorification. Not critical, humble thinkers, per se.
Hype and trends appeal to young, insecure men; they give them a way to create identity and a sense of belonging. MS and Oracle and the rest are happy to feed into it (cert mills, default examples that assume huge running subscriptions), even as they get eaten up by it on occasion.
Also, consider the terrible code bases and orgs that are out there… the amount of churn a bad JavaScript solution with eight frontend frameworks might necessitate, and the way tight systems code works, are very different things.
It's not just about them (link, Oracle); there is terrible code all over the place. Games, business software, everything.
It has nothing to do with the language! Anyone who claims that may be part of the problem, since they don't understand the problem and concentrate on superficial things.
Also, what looks terrible may not be so. I once had to work on an in-house JS app (for internal cost reporting and control). It used two GUI frameworks - because they had started switching to another one, but then stopped the transition. Sounds bad, yes? But, I worked on the code of the company I linked above, and that "terrible" JS app was easy mode all the way!
Even though it used two GUI frameworks at once, understanding the code, adding new features, debugging, everything was still very easy and doable with just half a brain active. I never had to ask my predecessor anything either; everything was clear with one look at the code. Because everything was well isolated and modular, among other things. Making changes did not affect other places in unexpected ways (as is common in biology).
I found some enlightenment - what seems to be very bad at first glance may not actually matter nearly as much as deeper things.
Speaking from ignorance, from ego, or both? There are only three major players: React, Vue, and Angular. Angular is batteries-included; the other two have their lib ecosystems, and if not you can easily wrap regular JS libs. That's about it. The JS ecosystem sees many newcomers, so it's only natural that some codebases were written poorly or that the FOTM mentality gets a lot of steam, against proper engineering principles.
Anecdotally, the worst code I've ever seen was in a PHP codebase, which, to me, would be the predecessor of JavaScript in this regard: lots of junior programmers maintaining legacy (or writing greenfield) systems because cheap businesses are cheap. Anyway, files thousands of LoC long, with broken indentation and newlines, and JS and CSS interspersed here and there. Truly madness, but that's another story. The point is JavaScript is JavaScript, and other fields like systems and backend, mainly backend, act conceited and talk about JS as if it were the devil, when the likes of C++ and Java aren't exactly known for pretty codebases either.
The agile methodologies are built around adaptive and corrective iterations of work. If an org is not adapting process, if they are not correcting process issues, they are by definition ‘not doing it right’ because they are fundamentally doing something else.
Agile can/should/must (d)evolve into waterfall in all but name if that's the local optimum. The agile methodologies' response to problems is to solve those issues through frequent, iterative, localized change. Failing to apply a methodology isn't a methodological failure, per se.
Business process mislabelling and misdirected frustration about lacking management are not examples of the No True Scotsman fallacy.
Frenchmen are not Scotsmen. Not False Scotsmen nor True Scotsmen. They have a different name for a reason. No matter how many tourists confuse the flags or culture, by definition they are separate and distinct. France and Scotland are literally on different pages in the books.
It is no coincidence there is a ‘typical response’ around this that has not changed for decades. Typical responses are the MBA version of RTFM.
Programmers have the benefit of being able to torture and kill our patients at scale (unit and integration testing); doctors, less so. The diagnostic skill of any given doctor may be relatively shallow, and they may be tired, overworked, or annoyed by a patient's self-expression… the results I've seen are commonly abysmal, and care providers are never shocked by poor diagnoses and misdiagnoses from other practitioners.
I have some statistically very common conditions and a family medical history with explicit confirmation of inheritable genetic conditions. Yet, if I explain my problems A to Z I’m a Zebra whose female hysteria has overwhelmed his basic reasoning and relationship to reality. Explained Z to A, well I can’t get past Z because, holy crap is this an obvious Horse and there’s really only one cause of Horse-itis and if your mom was a Horse then you’re Horse enough for Chronic Horse-itis.
They don't have time to listen, their ears aren't all that great, and the mind behind them isn't necessarily used to complex diagnostics with misleading superficial characteristics. Fire that through a 20-minute appointment, 10 of which is typing, maybe in a second language or while in pain, 6-plus-month referral cycles, and… presto: "it took a decade to identify the cause of the hoof prints" is how you spent your 30s and early 40s.
Yeah, same. Also, it completely freezes on my iPhone with enough code highlighting. It becomes completely unusable until I restart the app, and then breaks again once a new message is sent.
In addition to the underlying domain (and I agree, it's no coincidence that windowed GUI widgets are common in OOP textbooks), there are also overlapping spectrums of language expressiveness and object-orientedness to consider.
On the principle of least surprise: techniques that are bad, evil, and scary in general might be the orthodox, efficient, and intuitive choice for maintainers in context. Templates in GUI frameworks versus business apps, for example.
However many years later: the broad sentiment that MS needed to be broken up into separate Office, Windows, and Dev/Tools organizations was pretty on the money.
Document exchange, formats, and the user editing experience have suffered due to their mixed goals and market control; this has a real social cost. And with the current 'copilot everywhere' push we're seeing pretty disruptive tech being hammered down a lot of throats. Mature Visual Studio features are being deprecated for subscription-based, off-site code gen… (which at a distance sounds like MS is struggling and needs extra development help to maintain its flagship development software; if only they had some kind of AI that could help them keep up…)
I dare say we’d be oodles better off with similar crippling fears in the board rooms of some media, energy, and tech conglomerates. The judge was right, and we missed a key chance to set a guiding example.