First off, is there any tech debt? That's an assumption, and one that can just as easily be made about human-written code. Nobody writes debt-free code; that's why you have many checks and reviews before things go to production - ideally.
Second, in theory, future generations of AI tools will be able to review previous generations and improve upon the code. If it needs to, anyway.
But yeah, tech debt isn't unique to AIs, and I haven't seen anything conclusive showing that AIs generate more tech debt than regular people - but please share if you've got sources to the contrary.
(disclaimer: I'm very skeptical about using AI to generate code myself, but I will admit to using it for boring tasks like unit test outlines)
> Second, in theory, future generations of AI tools will be able to review previous generations and improve upon the code. If it needs to, anyway.
Is that what's going to happen? These are still LLMs. There's nothing about future generations that guarantees those changes would be improvements rather than flat-out regressions. Humans can't even agree on what good code looks like, as it's very subjective and heavily dependent on context and the skills of the team.
More likely, you'll ask gpt-6 to improve your code and it'll just make piddly architecture changes that don't fundamentally improve anything.