
There’s so much work to be done developing coding related tools that integrate AI and traditional coding analysis and debugging tools.

Also programming needs to be redesigned from the ground up as LLM first.



I am still skeptical about the value of LLM as coding helper in 2025. I have not dedicated myself to an "AI first" workflow so maybe I am just doing it wrong.

The most positive metaphor I have heard about why LLM coding assistance is so great is that it's like having a hard-working junior dev that does whatever you want and doesn't waste time reading HN. You still have to check the work, there will be some bad decisions in there, the code maybe isn't that great, but you can tell it to generate tests so you know it is functional.

OK, let's say I accept that 100% (I personally haven't seen evidence that LLM assistance is really even up to that level, but for the sake of argument). My experience as a senior dev is that adding juniors to a team slows down progress and makes the outcome worse. You only do it because that's how you train and mentor juniors to be able to work independently. You are investing in the team every time you review a junior's code, give them advice, answer their questions about what is going on.

With an LLM coding assistant, all the instruction and review you give it is just wasted effort. It makes you slower overall and you spend a lot of time explaining code and managing/directing something that not only doesn't care but doesn't even have the ability to remember what you said for the next project. And the code you get out, in my experience at least, is pretty crap.

I get that it's a different and, to some, interesting way of programming-by-specification, but as far as I can tell the hype about how much faster and better you can code with an AI sidekick is just that -- hype. Maybe that will be wrong next year, maybe it's wrong now with state-of-the-art tools, but I still can't help thinking that the fundamental problem, that all the effort you spend on "mentoring" an LLM is just flushed down the toilet, means that your long term team health will suffer.


> And the code you get out, in my experience at least, is pretty crap

I think that betrays a fundamental misunderstanding of how AI is changing the goalposts in coding.

Software engineering has operated under a fundamental assumption that code quality is important.

But why do we value the "quality" of code?

* It's easier for other developers (including your future self) to understand, and easier to document.

* Easier to change when requirements change.

* More efficient with resources, performs better (CPU/network/disk).

* Easier to develop tests if it's properly structured.

AI coding upends a lot of that, because all of those goals presume a human will, at some point, interact with that code in the future.

But the whole purpose of coding in the first place is to have a running executable that does what we want it to do.

The more we focus on the requirements and guiding AI to write tests to prove those requirements are fulfilled, the less we have to actually care about the 'quality' of the code it produces. Code quality isn't a requirement, it's a vestigial artifact of human involvement in communicating with the machine.
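A minimal sketch of what this workflow might look like in practice: the requirement-level checks become the maintained artifact, while the implementation body is treated as disposable AI output. The function `parse_price` and its spec here are hypothetical, purely for illustration.

```python
# Hypothetical example: the acceptance checks below encode the requirement.
# The implementation is treated as regenerable AI output -- its internal
# "quality" matters less as long as these checks keep passing.

def parse_price(text: str) -> float:
    """Placeholder body; imagine this was AI-generated and may be rewritten."""
    return float(text.replace("$", "").replace(",", ""))

# Requirement-level checks: observable behavior, independent of code style.
assert parse_price("$1,234.50") == 1234.50
assert parse_price("99") == 99.0
print("all requirement checks pass")
```

Under this view, a regenerated implementation with completely different internals is acceptable so long as the same checks pass.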


An important point, well made.

Often AI critics say things like "quality is bad" or "it made coding errors" or "it failed to understand a large code base".

AI proponents and expert users understand these constraints and know that these are not actually that important.


I don't know how they get their sources, but it would be nice if it came directly from official coding documentation (and not random Stack Overflow answers), and if those guides were, I don't know, more machine-readable? (That's not a passive-aggressive use of question marks, I'm genuinely just guessing here.)


>programming needs to be redesigned from the ground up as LLM first.

Yes, because non-deterministic systems make great software. I mean, who wants repeatable execution on the control program for their nuclear submarine or their hospital lighting controls? Why would anyone want a computer capable of actual math running the President's nuclear "football" when we can have the outputs of hallucinating tools running there?



