
What the author and many others find hard to digest is that LLMs are surfacing the reality that most of our work is a small bit of novelty on top of boilerplate, redundant code.

Most of what we do in programming is some small novel idea at a high level and repeatable boilerplate at a low level. A fair question is: why hasn’t the boilerplate been automated as libraries or other abstractions? LLMs are especially good at fuzzily abstracting repeatable code, and it’s simply not possible to get the same result from other, manual methods.

I empathise because it is distressing to realise that most of the value we provide is not in those lines of code but in that small innovation at the higher layer. No developer wants to hear that; they would like to think each line is a creation from their soul.



Most of the people doing the most rote and monotonous work were and are doing so in some of the least productive circumstances, with clear ways of increasing speed and productivity.

If development velocity were truly an important factor in these businesses, we'd have migrated away from that Gang of Four-ass Java 8 codebase, given these poor souls offices, or at least cubicles to reduce the noise, and we wouldn't make them spend 3 hours a day in ceremonial meetings.

The reason none of this happens is that even if these developers crank out code 10x faster, by the time it's made it past all the inertia and inefficiencies of the organization, the change is nearly imperceptible. Meanwhile, the bill for the new office and the 2-year refactoring effort is much more tangible.


Yep. It's ridiculous to talk about 10x or 5x or 2x anything in any but the smallest companies. All this talk about programmer velocity is micro-optimizing something that's not a bottleneck.


I’ve been thinking a lot about this. I think that AI software automation tools are disproportionately more useful in greenfield work done by small or tiny organizations. By an order of magnitude, maybe 2 in some cases.

What that means is anyone’s guess, but it seems like it should result in a Cambrian explosion of disruptive new companies, limited in scope by the idea space.

The thing about small teams is, with a few exceptions, the biggest challenges are typically funnels for users and product-market fit, overcoming and exploiting network effects, etc… so even in small orgs, if you make 30 percent of the problem 4x faster/smaller, you still have the other 70 percent, which is now roughly 90% of the remaining problem (70 / 77.5).

This applies even more acutely in larger organizations… so for them, 99 percent of the problem remains.
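The bookkeeping in the parent comment is essentially Amdahl's law. A minimal sketch of it (the function name is mine; the 30%/4x figures are the commenter's illustration):

```python
def remaining_share(affected, speedup):
    """Fraction of the new total that the untouched work represents
    after speeding up the affected fraction (Amdahl-style bookkeeping)."""
    new_total = (1 - affected) + affected / speedup
    return (1 - affected) / new_total

# Speed up 30% of the problem by 4x: the untouched 70% now
# accounts for about 90% of what's left (0.7 / 0.775).
print(round(remaining_share(0.30, 4), 3))  # 0.903
```

Note the diminishing returns: even an infinite speedup on that 30% still leaves 70% of the original work.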

Intangibles in an organization like reluctance, education, and organizational inertia fill the gap left by software acceleration, and in the end you only see tiny gains, if any.

What really happened, on an organizational scale, is that software development costs went down. We wouldn’t expect a wage collapse in coding to foment an explosive revolution in company profitability or dynamism. We shouldn’t expect those things of LLM assistance.

We should look at it as a reduction in cost with potentially dangerous side effects if not managed carefully, with an especially big reduction in r&d development costs.


As if that weren't enough, LLM coding "productivity" is measured in lines of code, of all things.


Literally the most useful thing LLMs have done for me is deal with the pile of corporate bullshit we have to put up with day to day.

For example, we have to plan 8 to 12 sprints in advance. Full acceptance criteria, story points, and slotted into a sprint with points balanced across the team. Of course this is utterly useless, anything past the second sprint is going to be wrong, but they want it done. LLMs got me through that in a few hours instead of a few days.


Making the bullshit cheaper to inflict only means that soon you will be graced with even more of it. Bullshit expands until capacity for it is exhausted.


>all the inertia and inefficiencies of the organization

Honestly you can probably use this as a means to measure the amount of regulation, graft, and corruption in an economy.

In a wild west free for all code velocity would likely be very fast with software popping up, changing rapidly, and some quickly disappearing.

But in an economy that doesn't care what you make, who you pay or which laws you buy matters far more.


In many organizations, software just isn't what makes the money. It's a supporting role at best. The software needs to work reliably, and it needs to keep working for a long time, but it doesn't need to gain new features at a rapid pace.

If the janitors swept the floors 10x faster, we wouldn't see any KPIs shoot up from that. You still need it to happen regularly and reliably and on demand in case there's a mess, but it doesn't need to be fast.


>If the janitors swept the floors 10x faster, we wouldn't see any KPIs shoot up from that

I mean, I'd expect you'd see a decrease in the number of janitorial staff, as convenient per the company's policies. That, and janitorial services would turn into contracted services rather than in-house positions. Oddly enough, both of these things have already occurred as janitorial tooling became better and the legal incentives made it cheaper.


Libraries create boundaries, which are in most cases arbitrary, that then limit the way you can interact with code, creating more boilerplate to get what you want from a library.

Abstractions are the source of bloat. Without abstractions you can always reduce bloat, or you can reduce bloat in your glue, but you can't reduce glue.

It takes discipline to NOT create arbitrary function signatures and short-lived intermediate data structures or type definitions. This is the beginning of boilerplate.

So many advances in removing boilerplate come from realizing that your 5 function calls and 10 intermediate data structures or type definitions essentially compute something you could do with 0 function calls, 0 custom datatypes, and fewer lines of code.

The abstraction hides how simple the thing you want is.

Problem is that all open source code looks like the bloat described above, so LLMs have no idea how to actually write code that is without boilerplate. The only place where I've seen it work is in shaders, which are usually written to avoid common pitfalls of abstraction.

LLMs are incapable of writing a big program in 1 function and 1 file that does what you want. Splitting the program into functions or even multiple files is a step you do after a lot of time, yet all open source looks nothing like that.


> Abstractions are the source of bloat. Without abstractions you can always reduce bloat, or you can reduce bloat in your glue, but you can't reduce glue.

I don’t think I agree. Here is an example.

    QTcpSocket socket;
    socket.connectToHost(QHostAddress::LocalHost, 1234);

Vs:

    // Needs <sys/socket.h>, <netinet/in.h>, and <arpa/inet.h>.
    int clientSocket = socket(AF_INET, SOCK_STREAM, 0);

    sockaddr_in serverAddr;
    serverAddr.sin_family = AF_INET;
    serverAddr.sin_port = htons(1234);
    inet_pton(AF_INET, "127.0.0.1", &serverAddr.sin_addr);

    connect(clientSocket, (sockaddr*)&serverAddr, sizeof(serverAddr));


Yep, people not understanding the value of abstraction is exactly why LLM coded apps are going to be a shit show. You could use them to come up with better abstractions, but most will not.


We already have tools to generate boilerplate, and they work exceptionally well. The LLM just produces nondeterministic boilerplate.

I also don't know what work you do, but I would not characterize the codebases I work in as "small bits of novelty" on boilerplate. Software engineering is always a holistic systems undertaking, where every subcomponent and the interactions between them have to be considered.


I wrote a book a while back where I argued that coding involves choosing what to work on, writing it, and then debugging it, and that we tend to master these steps in reverse chronological order.

It's weird to look at something that recent and think how dated it reads today. I also wrote about the Turing test as some major milestone of AI development, when in fact the general response to programs passing the Turing test was to shrug and minimize it.


I would argue that chatbots still barely pass the turing test

They have such obvious patterns and tells that humans have already picked up on them and can eventually suss out that they're talking to an LLM.

For instance, I heard recently about someone talking (verbally) with an AI-voiced customer support agent. They were very convinced, so they asked the support agent to calculate the product of two large numbers, and it replied with the result instantly.

I would argue that fails the Chinese Room test.


Barely pass is still a step change though. Either you can be sure what's on the other end of the line or you can't, and I'd say that, while there are still tests that work sometimes, for a purely text based exchange there are none that will work at all times.


No, I don’t agree. Just because it’s « boilerplate », that does not mean it’s worthless or doesn’t carry novelty. There is « boilerplate » in building many things, houses, cars, etc., where to add real new stuff it’s « always the same base », but you have to nail that base and there is real value in it. With craft and deep knowledge and pride. Every project is different, and not everything can be made from a generic off-the-shelf product.


> Just because it’s « boilerplate », that does not mean it’s worthless

Of course it is not. It is needed, by definition.

> or doesn’t carry novelty.

Of course it does not. Why would a piece of code that simply fills a large C structure with constants be innovative?

> Every project is different and not everything can be made from a generic off-the-shelf product

Tangential to use of LLMs for boring boilerplate stuff.


A house doesn't seem a good example, because it is made of physical things.

    from foundations import ConcreteStrip

    ConcreteStrip(x, y, z)

Doesn't work for houses.


There isn’t just concrete in a house. There are hundreds of things that vary from house to house (even country to country, and laws), so it’s more like the building blocks are not only library imports but the language itself (raw materials), which makes it a fit analogy for me.


Boilerplate has been with us since the dawn of programming.

I still think LLMs as fancy autocomplete is the truth and not even a dig. Autocomplete is great. It works best when there’s one clear output desired (even if you don’t know exactly what it is yet). Nobody is surprised when you type “cal” and California comes up in an address form, why should we be surprised when you describe a program and the code is returned?

Knowledge has the same problem as cosmology: the part we can observe doesn’t seem to account for the vast majority of what we know is out there. Symbolic knowledge encompasses unfathomable multitudes and will eventually be solved by AI, but the “dark matter” of knowledge that can’t easily be expressed in language or math is still out in the wild.


Books are just simple theses and themes with hundreds of pages of boilerplate


This is a great example actually.

To me, a function is a single sentence within a book. It may approach the larger picture, but that sentence can be reviewed, changed, switched around, killed by an editor.

Some programmers believe they're fantastic sentence writers. They brag about how good a sentence they write; their entire worldview has been built on being good sentence creators. Especially within enterprises, you may spend your entire life writing sentences without ever really understanding the whole book.

If your worldview has been built on sentence creation, and suddenly there's a sentence creator AI, you're going to be deathly afraid of it replacing you as a sentence writer.


Hit songs are just simple four-chord loops stretched over three minutes of synthetic boilerplate.


Both the book and the song analogies are incorrect. In the case of code, the users for whom the programmes are written are not engaging with the statements of the code; they are interacting with the interfaces the programmes provide.

This is not the same when it comes to books and music.


There are many orders of magnitude more songs based on four-chord loops than there are hit songs. Some people say it's easy to make a hit song, but far more people want to do it than succeed. So I say no: your take is reductive, and there is necessarily more to it.


lot of people are saying this


Actually I think this is one of the more tragic outcomes of the LLM revolution: it was already hard to get funding for ergonomic advances in programming before. Funding a new PL ecosystem or major library was no mean feat. Despite that, there were a number of promising advances that could have significantly raised the level of abstraction.

However, LLMs destroy this economic incentive utterly. It now seems most productive to code in fairly low level TypeScript and let the machines spew tons of garbage code for you.


It has been automated as much as possible; boilerplate is the result of people being terrible at designing programming languages. The whole idea of increasingly higher-level languages was just that all along. Is there any point in writing a billion ADDC instructions in assembly by hand when it takes one line in Python?
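The jump in level the comment describes is easy to caricature: the explicit-accumulator loop below is the shape of that hand-written assembly, and the language already ships the one-liner.

```python
numbers = [3, 1, 4, 1, 5, 9, 2, 6]

# The low-level shape: explicit accumulator, explicit loop,
# the moral equivalent of a run of ADD instructions.
total = 0
for n in numbers:
    total += n

# The one line the higher-level language gives you.
assert total == sum(numbers)
print(total)  # 31
```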

LLM-type systems are the final level of abstraction, lifting it up to literal natural language. Any dev with decent self-awareness would admit they were just copying shit from Stack Overflow half the time before LLMs anyway; high-level languages and libraries just streamline that process with canonical implementations.

The value we provide is turning "person with problem" -> "person with solution to said problem" with as few caveats as possible. A programmer is that arrow; we solve problems. The more code we have to write to solve a problem, the worse we are at our job.


Programmers aren't paid to code.

FORTRAN ("formula translator") was one of the first compilers ever written, and it was supposed to make coding obsolete. Scientists would now be able to just type in formulas and the computer would calculate the result, imagine that!


Is this claim historical? As in, it was actually made at the time?


Which claim, exactly? That "coding will be made obsolete"?

Yes, it is. Literally every programming innovation claims to "make coding obsolete". I've seen a half dozen in my own lifetime.


it is like knocking down the vending machine, you have to rock it back and forth a lot before it falls down


"Programmers are obsolete, but like, for real this time guys!"


Abstraction isn't free... even if you had the correct abstraction and the tools to remove the parts you don't need for deployment, there is still the cost of understanding and compiling.

There is also the cost reason: somebody trying to sell an abstraction will try to monetize it, which means not everyone will want to or be able to use it (or it will take forever / be unfinished if it's open/free).

There's also the platform lockin/competition aspect...


This is actually quite an insightful comment into the mindset of the tech set vs. the many writers and artists whose only 'boilerplate redundant code' is the language itself, and a loose aggregate of ideas and philosophies.

Probably the original sin here is that we started calling them programming languages instead of just 'computer code'.

Also - most of your work is far more than mere novelty! There are intangibles like your intellectual labor and time.


> "Why hasn’t the boilerplate been automated as libraries or other abstractions?"

Because a lot of programmers don't know how to copy-paste or make packages for themselves? We have boilerplate at my work, which comprises some ready-made packages that we can drop in and tweak as needed, no LLMs required.


They don't go as deep as LLMs, which capture regularities at much more granular levels.


I'm not entirely sure what you mean... If something becomes repetitive enough to be boilerplate, we can just make it into a package and keep it around for the next time


> A fair question is: why hasn’t the boilerplate been automated as libraries or other abstractions?

Because our ways of programming computers are still woefully inadequate and rudimentary. This is why we have tons of techniques for code reuse, yet we keep reinventing the wheel, because they shatter on contact with reality. OOP was supposed to save us all in the 1990s; we've seen how that went.

In other fields we've had a lot of time to figure out basic patterns and components that can be endlessly reused. Imagine if car manufacturers had to reinvent the screw, the piston, the gear and lubricants for every new car model.

One example that has bugged me for a decade is: we've been in the Internet era for decades at this point, yet we spend a lot of time reinventing communication. An average programmer can't spend two days without having to deal with JSON serialization, or connectivity, or sending notifications about the state of a process. What about adding authentication and authorization? There is a whole cottage industry to simplify something that should be, by now, almost as basic as multiplying two integers. Isn't that utter madness? It is a miracle we can build complex systems at all when we have to focus on this minutiae that pop up in every single application.
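As a concrete instance of that minutiae: even the "solved" version of JSON serialization, using only the Python standard library, still demands hand-written glue on the way back in. The `Notification` type below is a made-up example.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Notification:  # hypothetical payload type
    user_id: int
    message: str

# Serializing is one call...
payload = json.dumps(asdict(Notification(user_id=7, message="done")))

# ...but turning the JSON back into the typed object is manual glue,
# the kind of per-application boilerplate the comment laments.
data = json.loads(payload)
restored = Notification(user_id=data["user_id"], message=data["message"])
assert restored == Notification(user_id=7, message="done")
```

Multiply that glue by every field, every nested type, and every service boundary, and the cottage industry makes sense.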

Now we have intelligences that can create code, using the same inadequate language of grunts and groans we use ourselves in our day to day.


Standardization and regulation forced a lot of the physical industries to change as the industrial revolution progressed. Before that point standards didn't really exist, especially over any large areas and technological progress suffered because of that. After that point solutions became much closer to drag and drop than what they were before.

The question is: at what point of progress will it benefit the software industry?


I was watching a film from 1959 where the main character just swaps headlights and tires from completely different cars like they were just nominally interchangeable.

It feels like the era of standardization already came and went. It seems like product designers are now being deliberately obtuse so that their product quirks create lock-in and therefore more revenue. I expect that genAI will put fuel on this fire.


> fair question is: why hasn’t the boilerplate been automated as libraries or other abstractions?

Sometimes it has. The amount of generated code that a select count(distinct id) from customers would produce is huge.


Time to learn design, how to talk to customers, and how to discover unsolved problems. Used right LLMs should improve your software quality. Make stuff that matters that you can be proud of.


> how to discover unsolved problems

There are plenty of them, and LLMs will not help you much there, because unsolved problems are, by definition, out-of-distribution samples.

Neural networks are interpolators, not extrapolators.
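The point can be demonstrated with any fitted model, not just a neural network. Below, a least-squares line (standing in for the network) is fit to samples of y = x² on [0, 10]; its error is modest inside that range and explodes outside it.

```python
# Training data: y = x**2 sampled on [0, 10].
xs = [float(x) for x in range(11)]
ys = [x * x for x in xs]

# Closed-form least-squares fit of y = a*x + b.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
a /= sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

def predict(x):
    return a * x + b

print(abs(predict(5.0) - 25.0))       # in-range error: 10.0
print(abs(predict(100.0) - 10000.0))  # out-of-range error: 9015.0
```

Inside the training range the line interpolates tolerably; at x = 100 it is off by an order of magnitude, exactly the out-of-distribution failure described above.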


What I find hard to digest is not being able to pay rent and dying of old age in a ditch in poverty.


dying of old age, what luxury, I’m more worried about starving to death in my mid 50s.


> most of our work is a small bit of novelty against boiler plate redundant code...

Care to share some examples that prove your point?


> why hasn’t the boilerplate been automated as libraries or other abstractions?

Cue the smug Lisp weenies.



