Skill retardation is actually beside the point. I'm mainly raising a counterexample to show why the following (rough paraphrase) is not sound: "SOME people figured out how to use these tools to go 2x to 4x faster, so you do the same, or you're fired!".
Let's say "n" is the total complexity of a system. While some developers can take an approach that yields a development output of roughly (1.5 * log n), the AI tools might yield a development output of roughly (4 * log n)^4 / n. That is, initially more and faster, but eventually a lot less and slower.
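To make the shape concrete, here's a toy sketch (the curves are rhetorical, not measured; the constants come from the hand-waving above and the log base is an arbitrary choice of mine) comparing the two outputs as n grows:

```python
# Toy comparison of the two hypothetical output curves above.
# Illustrative only -- not measured data.
import math

for n in (10**2, 10**3, 10**4, 10**5, 10**6, 10**7):
    human = 1.5 * math.log(n)            # steady, slowly growing output
    ai = (4 * math.log(n)) ** 4 / n      # big head start, decays as n grows
    print(f"n={n:>10,}  human={human:6.1f}  ai={ai:10.1f}")
```

Exactly where the curves cross depends on the constants, but the shape is the point: a large head start that decays as complexity accumulates.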
The parable of the Soviet beef farmer comes to mind. In it, the USSR mandated that its beef farmers increase output by 20% year over year, every year. The first year, the heroic farmer improved the health of their livestock, bought a few extra cows, and hit the target. The second year, to hit 20% again, the farmer cut every corner and squeezed out every efficiency, even trading away all their possessions for some black-market cows. The third year, the farmer couldn't make the 20% increase, so they slaughtered almost all of their herd. The fourth year, the farmer had essentially no herd left and couldn't come close to the previous year's output, let alone increase it. Having fallen so far short of quota, the heroic beef farmer shot himself.
(Side note: this is also analogous to people not raising their skill levels, but that's not my main point - I'm more interested in how development slows down relative to the complexity and size of a software system. The 'not increasing skills' angle is arguably there too. The main point is short-term trade-offs made to hit goals rather than choosing sustainable long-term targets, and how those decisions relate to a blind demand to increase output.)
So, instead of working on the home's insulation, instead of upgrading the heating system, we burn the furniture to heat the home faster. It works... to a point. What happens when you run out of furniture, or the house catches fire? Seemingly that will be a problem for Q2 of next year; for now, we are moving faster!!
I think this ties into the programming industry quite heavily, because managers often want things to work just long enough for them to be promoted. It doesn't have to work well for years, doesn't need the support tooling required for that - just long enough that they can collect the quarterly reward and move on, leaving the support mess behind. On top of that, the feedback cycle for whether something was a good idea in software is slow, oftentimes years. AI tools have only been out for a couple of years; it'll be another few before we see what happens when a system has been grown to 5M lines mostly through AI tooling and the codebase itself is 10 years old - will that system be too brittle to update?
FWIW, I'm of the view that quality, time, and cost are not an iron triangle - it is not a choose-two situation. Instead, quality is a prerequisite for low cost and low time. You cannot move quickly when quality is low (in my experience, the slowdown from low quality can manifest quickly too, on the order of hours: a shortcut taken now can reduce velocity later that same day).
Thus, mandates from management to move 2x to 4x faster, when it's not clear that AI tools actually deliver 2x to 4x benefits over the longer term (perhaps not even in the shorter term), feel a lot like the Soviet beef farmer parable, or burning furniture to stay warm.
If your AI scaling statement is accurate then the problem will eventually solve itself as organizations that mandated AI usage will start to fall behind their non-AI mandating peers.
My experience so far is that, if you architect your systems properly, AI continues to scale very well with codebase size. It's worth noting that the architecture that supports sustained AI velocity improvement may not be the architecture some human architects have previously grown comfortable with as optimal for human productivity in their organization. This is part of the learning curve of the tools, IMO.
> If your AI scaling statement is accurate then the problem will eventually solve itself as organizations that mandated AI usage will start to fall behind their non-AI mandating peers.
I find one of the biggest differences between junior engineers and seniors is that they think differently about how complexity scales. Juniors don't think about it as much and do very well in small codebases where everything is quick. They do less well when the complexity grows, and sometimes the codebase simply falls over.
It's like billiards. A junior just tries to make the most straightforward shot and get a ball in the pocket. A senior does the same, but they think about where the cue ball will end up for the next shot, and they take the shot that leaves them in a good position to make the next one.
I don't see AI as possessing the skill a senior has to say, "No, this previous pattern is no longer the way we do things, because it has stopped scaling well. We need to move all of these hardcoded values into a database and then approach the problem that way." AFAIK, AI is not capable of that at all; it lacks a key skill of a senior engineer. Thus, it can't build a system that scales well with respect to complexity, because it is not forward-thinking.
I'll posit as well that knowing how to change a system so that it scales better is an emergent property. It's impossible to do that architecture up front, so an AI would need to be able to say, "Gee, this is not going well anymore - we need to switch from hardcoded variables to a database, NOW, before we implement anything else." I don't know of any AI capable of that. I could agree that once that point is reached, and a human starts prompting for how to refactor the system (which is itself a sign the complexity was not managed well), it's possible to reduce the interest cost of outsized complexity by using an AI to start managing the AI-induced complexity...
>as organizations that mandated AI usage will start to fall behind their non-AI mandating peers.
You're assuming organizations operate with quality and velocity as the goal. We saw that WFH made people more productive and gave them a higher quality of life; companies are still trying to enforce RTO as we speak. The productivity was deemed not worth it compared to other factors like real estate, management ego, and punishing the few who abused the privilege.
We're in weird times and sadly many companies have mature tech by now. They can afford to lose productivity if it helps make number go up.
> If your AI scaling statement is accurate then the problem will eventually solve itself as organizations that mandated AI usage will start to fall behind their non-AI mandating peers.
All things being equal, I would agree. Things are not equal, though. The slowdown can manifest as needing more developers for the same productivity, or as lots of new projects to do things like "break the AI monolith into microservices" - all the things a company needs to do when growing from 50 employees to 200. Positing a magically different architecture is kind of just imagining a different reality; there's too much chaos to say with confidence that one approach would really have made the difference. One thing, though: it often takes 2 to 5 years before you know whether the chosen approach was 'bad' or not (and why).
Companies that are trying to scale - almost no two are alike. So it'll be difficult to do a peer-to-peer comparison; it won't be apples to apples (and where it is, the sample size is absurdly small). Did architecture kill a company, or bad team cohesion? Did good team cohesion save the company despite bad architecture? Did AI slop slow things down so much that the company couldn't grow revenue? It's very hard to make peer-to-peer comparisons when the problem space is so complex and chaotic.
It's also amazing what people and companies can do with sheer stubbornness. Facebook has (I hear) 1,000+ engineers just for their mobile app.
> My experience so far is that, if you architect your systems properly, AI continues to scale very well with codebase size. It's worth noting that the architecture that supports sustained AI velocity improvement may not be the architecture some human architects have previously grown comfortable with as optimal for human productivity in their organization
I fear this is the start of a no-true-Scotsman argument. That aside, what is the largest codebase you have reached so far? Would you mind providing some insight into the architecture differences for an AI-first codebase? Are there any articles or blog posts I could read? I'm very interested to learn more about where certain good architectures are not good for AI tooling.
AI likes modular function grammars with consistent syntax and interfaces. In practice this means you want a monolithic service architecture or a thin function-as-a-service architecture with a monolithic imported function library. Microservices should be avoided if at all possible.
The goal there is to enable straightforward static analysis and dependency extraction. With all relevant functions and dependencies defined in a single codebase or importable module, you can reliably parse the code and determine exactly which parts need to be included in context for reasoning or code generation. LLMs are bad at reasoning across service boundaries, and even if you have OpenAPI definitions, the language shift tends to confuse them (and I think they're just less well trained on OpenAPI specs than on other common languages).
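To make that concrete, here's a minimal sketch of the kind of dependency extraction I mean, assuming a single-repo Python codebase where module names mirror file paths (the `app/billing.py` entry point is made up):

```python
# Statically walk imports from an entry point so that only the relevant
# files get pulled into the model's context.
import ast
from pathlib import Path

def local_imports(path: Path, repo_root: Path) -> set[Path]:
    """Repo-local files imported by `path`; stdlib/third-party are ignored."""
    found = set()
    for node in ast.walk(ast.parse(path.read_text())):
        names = []
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        for name in names:
            candidate = repo_root / (name.replace(".", "/") + ".py")
            if candidate.exists():
                found.add(candidate)
    return found

def dependency_closure(entry: Path, repo_root: Path) -> set[Path]:
    """Transitively collect every repo-local file reachable from `entry`."""
    seen, stack = set(), [entry]
    while stack:
        current = stack.pop()
        if current in seen:
            continue
        seen.add(current)
        stack.extend(local_imports(current, repo_root))
    return seen

if __name__ == "__main__":
    root = Path(".")
    for f in sorted(dependency_closure(root / "app" / "billing.py", root)):
        print(f)  # candidate context files for the LLM
```

With microservices, the equivalent closure crosses repo and network boundaries, and there's nothing this simple you can parse to recover it.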
Additionally, to use LLMs for debugging you want access to a single logging stream where they can see the original sources of the logging statements in context. If engineers have to collect logs from multiple locations, load them into context manually, and go repo-hopping to find the places in the code emitting those logging statements, it kills iteration speed.
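As a small example (assuming a Python service; the logger name and output path are made up), even the standard `logging` module can stamp the emitting file and line number on every message, which is the kind of single, self-locating stream I mean:

```python
# One consolidated stream where every line carries its source location,
# so a model (or a human) can jump from a log line straight to the code.
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(pathname)s:%(lineno)d %(message)s",
)

log = logging.getLogger("orders")  # hypothetical subsystem name
log.info("reserving inventory for order_id=%s", 1234)
# e.g. 2024-05-01 12:00:00,000 INFO /srv/app/orders.py:11 reserving inventory for order_id=1234
```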
Finally, LLMs _LOVE_ good documentation even more than humans do, because humans usually have the advantage of business/domain context from real-world interactions and can use that to contextually fumble their way to an understanding of the code. AI doesn't have that, so that context needs to be made as explicit in the code as possible.
The largest individual repo under my purview is currently around 250k LoC. My experience (with Gemini at least) is that you can functionally load up to about 10k LoC into a model at a time, which should _USUALLY_ be enough to let you work even in huge repos, as long as you pre-summarize the various folders across the repo (I like to put a README.md in every non-trivial folder for this purpose). If you write pure, functional code as much as possible, you can use signatures and summary docs for large swathes of the repo, combined with parsed code dependencies for the parts actively being worked on, and instruct the model to request full source for modules as needed - it's actually pretty good about it.
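As a rough sketch of the pre-summarizing step (my assumptions, not a claim about any particular tool: Python files only, ~4 characters per token, and naive truncation where a real tool would rank and trim), something like this builds the compressed repo map handed to the model up front:

```python
# Build a compressed "repo map" from per-folder READMEs and bare
# function/class signatures, capped at a crude character budget.
import ast
from pathlib import Path

BUDGET_CHARS = 40_000  # roughly the ~10k-token figure mentioned above

def signatures(path: Path) -> list[str]:
    """Bare def/class lines from a Python file, as a cheap summary."""
    try:
        tree = ast.parse(path.read_text())
    except (SyntaxError, UnicodeDecodeError):
        return []
    out = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            out.append(f"def {node.name}({args}): ...")
        elif isinstance(node, ast.ClassDef):
            out.append(f"class {node.name}: ...")
    return out

def repo_map(root: Path) -> str:
    """Concatenate folder READMEs and file signatures, then truncate to budget."""
    chunks = []
    folders = [root] + sorted(p for p in root.rglob("*") if p.is_dir())
    for folder in folders:  # a real tool would skip .git, virtualenvs, etc.
        readme = folder / "README.md"
        if readme.exists():
            chunks.append(f"## {folder}\n{readme.read_text()}")
        for py in sorted(folder.glob("*.py")):
            sigs = signatures(py)
            if sigs:
                chunks.append(f"### {py}\n" + "\n".join(sigs))
    return "\n\n".join(chunks)[:BUDGET_CHARS]

if __name__ == "__main__":
    print(repo_map(Path(".")))
```

The model then works from this map and asks for the full source of specific modules only when it needs them.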