No, what's happening here is that you're using a different definition of "boilerplate" than the adopters are using. To you, "boilerplate" is literally a chunk of code you copy and paste to repeatedly solve a problem (btw: I flip my shit when people do this on codebases I work on). To them, "boilerplate" represents a common set of rote solutions to isomorphic problems. The actual lines of code might be quite different, but the approach is the same. That's not necessarily something you can copy-paste.
Coming at this from a computer-science or PLT perspective, this idea of an "abstract, repeatable meta-boilerplate" is exactly the payoff we expect from language features like strong type systems. Part of the point of rigorous languages is to create these kinds of patterns. You had total expressiveness back in assembly language! Repeatable rigor is most of the point of modern languages.
> To them, "boilerplate" represents a common set of rote solutions to isomorphic problems.
That's what libraries and frameworks are for. And that's why no experienced engineer considers those an issue. What's truly important is the business logic; then you find a set of libraries that solves the common use cases and you write the rest. Sometimes you're in some novel space that doesn't have libraries (a new programming language), but you still have specs and reference implementations to help you out.
The actual boilerplate is when you have to write the same code twice because the language ecosystem doesn't have good macros à la Lisp that would let you invent some metastuff for the problem at hand (think writing routers for express.js).
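To make that concrete, here's a minimal TypeScript/express sketch. The resource shape and store names are invented for illustration; the point is that without macros, the best you can do is a runtime helper that stamps out the same few routes per resource:

```typescript
import express from "express";

const app = express();
app.use(express.json());

// The rote version repeats the same routes for every resource:
//   app.get("/users", listUsers);
//   app.get("/users/:id", getUser);
//   app.post("/users", createUser);
//   ...and again for /orders, /products, ...

// Hypothetical minimal interface a resource store would satisfy.
interface Resource<T> {
  list(): Promise<T[]>;
  get(id: string): Promise<T | undefined>;
  create(data: unknown): Promise<T>;
}

// Stamp out the standard three routes from a description of the resource.
function mountResource<T>(path: string, store: Resource<T>) {
  app.get(path, async (_req, res) => res.json(await store.list()));
  app.get(`${path}/:id`, async (req, res) => {
    const item = await store.get(req.params.id);
    item ? res.json(item) : res.sendStatus(404);
  });
  app.post(path, async (req, res) =>
    res.status(201).json(await store.create(req.body)));
}

// mountResource("/users", userStore);   // hypothetical stores
// mountResource("/orders", orderStore);
```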
People keep saying this. LLMs (not even agents; LLMs themselves, intrinsically) use frameworks. They're quite good at them. Frameworks make programs more legible to LLMs, not less.
> LLMs (not even agents; LLMs themselves, intrinsically) use frameworks.
That's not what I see the parent comment saying. They're not saying that LLMs can't use frameworks, they're saying that if you have rote solutions that you are being forced to write over and over and over again, you shouldn't be using an LLM to automate it, you should use a framework and get that code out of your project.
And at that point, you won't have a ton of boilerplate to write.
The two sides I see online are the people who think we need a way to automate boilerplate and setup code, and the people who want to eliminate boilerplate (not just the copy-paste kind, but also the "ugh, I've got to do this thing again that I've done 20 times" kind).
Ideally:
> a common set of rote solutions to isomorphic problems
Should not be a thing you have to write very often (or if it is, you should have tools that make it as quick to implement as it would be to type a prompt into an LLM). If that kind of rote repetitive problem solving is a huge part of your job, then to borrow your phrasing: the language or the tools you're using have let you down.
Copy-pasting code that could be abstracted is not a usage of "boilerplate" I've ever encountered; usually it's a reference to certain verbose languages where you have to write a bunch of repetitive low-entropy stuff to get anywhere, like getters and setters in Java classes.
Getters and setters definitely fall into "copy/paste + macro" territory for me. Just copy/paste your field list and run a macro that turns each field into a getter and setter. Or use an IDE shortcut. Lombok obviates all this anyway of course.
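For anyone who hasn't suffered this: a minimal sketch of the rote pattern (in TypeScript rather than Java, with invented field names), just to show how mechanical the copy/paste-and-macro step is:

```typescript
// One private field plus one near-identical accessor pair,
// repeated verbatim for every field on the class.
class Account {
  private _owner = "";
  private _balance = 0;

  get owner(): string { return this._owner; }
  set owner(v: string) { this._owner = v; }

  get balance(): number { return this._balance; }
  set balance(v: number) { this._balance = v; }

  // Every new field means copying another accessor pair and
  // renaming twice, which is exactly what an editor macro
  // (or Lombok, on the Java side) automates away.
}
```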
> The actual lines of code might be quite different, but the approach is the same. That's not necessarily something you can copy-paste.
Assuming something like "a REST endpoint which takes a few request parameters, makes a DB query, and returns the response" fits what you're describing, you can absolutely copy/paste a similar endpoint, change the parameters and the database query, and rename a couple variables—all of which takes a matter of moments.
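Concretely, something like this (TypeScript/express; `db.query` is a hypothetical stand-in for whatever database client you actually use): copy the previous endpoint, swap the route, the parameters, and the query, and you're done in moments.

```typescript
import express from "express";
import { db } from "./db"; // hypothetical database client module

const app = express();

// Copied from a near-identical endpoint; only the route,
// the request parameters, and the SQL changed.
app.get("/invoices", async (req, res) => {
  const { customerId, status } = req.query;
  const rows = await db.query(
    "SELECT * FROM invoices WHERE customer_id = $1 AND status = $2",
    [customerId, status],
  );
  res.json(rows);
});
```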
Naturally code that is being copy-pasted wholesale with few changes is ripe to be abstracted away, but patterns are still going to show up no matter what.
Sure—but I can also write it pretty instantly, with some judicious copy/pasting.
And the less instantly I can write it, the more petty nuances there are to deal with—things like non-trivial validation, a new database query function, a header that I need to access—the more ways an LLM will get it subtly wrong.
If I treat it as more than a fancy autocomplete, I have to spend all my time cleaning up after it. And if I do treat it as fancy autocomplete, it doesn't save that much time over judicious copy/pasting.
It used to be that the answer was the library one wrote to solve them all at once - the solutions being isomorphic, that is, not the problems. I.e. given problems a, b, c, d, where f(a) solves a, g(b) solves b, …, j(d) solves d: it turns out that if you can write a common function F instead of f, g, h, j separately, then clearly F is isomorphic to f, g, h, j under our constraints.
I guess in this case the morphism is the similar or same prompt to generate f g h j.
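A minimal TypeScript sketch of that F, with invented names and a made-up API URL: four near-identical fetch-and-parse functions collapse into one generic one, and the per-problem part shrinks to a parser.

```typescript
// Before: f, g, h, j - one near-identical function per type.
//   async function fetchUser(id: string): Promise<User> { ... }
//   async function fetchOrder(id: string): Promise<Order> { ... }
//   ...

// After: a single F that subsumes them.
async function fetchEntity<T>(
  endpoint: string,
  id: string,
  parse: (raw: unknown) => T,
): Promise<T> {
  const rsp = await fetch(`https://api.example.com/${endpoint}/${id}`);
  if (!rsp.ok) throw new Error(`${endpoint}/${id}: HTTP ${rsp.status}`);
  return parse(await rsp.json());
}

// const user  = await fetchEntity("users", "42", parseUser);
// const order = await fetchEntity("orders", "7", parseOrder);
```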
But what do you make of the parent’s second paragraph? This is the way I feel as well - I would rather not spend my time asking AI to do something right that I could just do myself.
I bit the bullet last week and tried to force myself to use a solution built end to end by AI. By the time I’d finished asking it to make changes (about 25 in total), I would’ve had a much nicer time doing it myself.
The thing in question was admittedly only partially specified. It was a YAML-based testing tool for running some scenarios involving load tests before and after injecting some faults into the application. I gave it the YAML schema up front, and it did a sensible job as a first pass. But then I was in the position of reading what it wrote, seeing implicit requirements I'd not specified, and asking for those.
Had I written it myself from the start, those implicit requirements would've been more natural to think about while iterating on the tool. But in this workflow I just couldn't get into a flow state - the process felt very unnatural, not unlike asking a junior to do it and then putting it through 25 rounds of code review. And that has always been a miserable task, difficult to force oneself to stay engaged with. By the end I was much happier making manual tweaks, and I wish I'd written it myself from the start.
I'm firmly convinced at this point that there is just no arguing with the haters. At the same time, it feels like this transition is as inevitable as the transition to mobile phones. LLMs are everywhere, and there's no escaping it no matter how much you might want to.
There's always some people that will resist to the bitter end, but I expect them to be few and far between.
It's not really a matter of wanting to escape them. I've used them. They're not good for writing code. They feel zippy, but you have to constantly clean up after them. It's as slow and frustrating as trying to walk a junior engineer through a problem they don't fully understand. I'd rather do it myself.
If the AI agent future is so inevitable, then why do people waste so much oxygen insisting upon its inevitability? Just wait for it in silence. It certainly isn't here yet.
> there's no escaping it no matter how much you might want to
And if we accept that inevitability, it becomes a self-fulfilling prophecy. The fact that some people _want_ us to give in is a reason to keep resisting.
> Not everyone is just cranking out hacked together MVPs for startups
Now here’s the fun part: In a really restrictive enterprise environment where you’ve got unit tests with 85% code coverage requirements, linters and static typing, these AI programming assistants actually perform even better than they do when given a more “greenfield” MVP-ish assignment with lots of room for misinterpretation. The constant “slamming into guardrails” keeps them from hallucinating and causes them to correct themselves when they do.
The more annoying boxes your job makes you tick, the more parts of the process that make you go “ugh, right, that”, the more AI programming assistants can help you.
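To make "guardrail" concrete, here's the sort of hard failure an assistant keeps slamming into (a hypothetical Jest config; the 85% just mirrors the coverage requirement above). The build, and therefore the agent's loop, fails until the tests actually cover the code:

```typescript
// jest.config.ts
import type { Config } from "jest";

const config: Config = {
  collectCoverage: true,
  // CI fails whenever any of these global thresholds is missed.
  coverageThreshold: {
    global: {
      branches: 85,
      functions: 85,
      lines: 85,
      statements: 85,
    },
  },
};

export default config;
```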
Unfortunately high code coverage is misaligned with high quality code.
If one copy-pastes a routine to make a modified version (that's used), code coverage goes UP. Sounds like a win-win for many…
Later, someone consolidates the two near identical routines during a proper refactoring. They can even add unit tests. Guess what? Code coverage goes DOWN!
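With invented numbers: say a module has 100 lines with 80 covered (80%). Copy-paste a 10-line routine that the existing tests already exercise and tweak it: now 90 of 110 lines are covered, roughly 82%, so the metric went up. Later someone consolidates the near-duplicates back into one routine: 80 of 100 covered, back to 80%, so the metric went down, even though the code got strictly better.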
Sure, having untested un-executed code is a truly horrible thing. But focusing on coverage can be worse…
Didn't we quickly turn against TDD after it became unavoidably obvious that it doesn't work?
What is with this new paradigm where we act like everything is easily measurable and every measure is perfectly aligned with what we want to measure? We know these things aren't possible; it doesn't take much thought to verify this. Do you believe you are smart enough to test for all possible issues? No one is. There'd be no CVEs if anyone could, and we'd have solved all of physics centuries ago.
A human might see a gap in guardrails and avoid it, or upon seeing unexpected behavior they might be able to tell that a guardrail was breached and have some intuition of where. An LLM will happily burst through a gap in the guardrails, claim it has solved the problem, and require just as much human effort to fix, plus even more long-term maintenance because of lower code familiarity.
An LLM, which has deep advantages over humans in rote pattern recognition, will see gaps in guardrails that humans miss. In the end, it's a wash. What's not a wash is all the rest of the instrumentation and formal methods stuff an LLM agent enables, all of which is available to every dev team today but doesn't get used because type safety, semgrep rules, advanced linters, and TLA+ specs are too time-consuming for humans to deploy. Not a problem for an LLM agent.
And not everyone working in things that aren’t “hacked together MVPs” has your experience. You can read any number of reports about code generated at FAANG, incident response tooling that gets to RCA faster, etc.
There are obviously still things it can’t do. But the gap between “I haven’t been able to get a tool to work” and “you’re wrong about the tool being useful” is large.