
> extremely painful debugging experiences.

I'd claim that if you're debugging the code - or even looking at it for that matter - that you're using AI tools the wrong way.



I'd be very interested to know of a way to make it work with AI that doesn't require debugging if you can illustrate.


Make what work with AI?


Congratulations. You allow the AI to make some new subroutine, and you immediately commit and merge the changes to your system. You run it, and it executes without throwing any immediate errors.

The business domain is far more nuanced and complex than that, and your flimsy "does it compile" test doesn't begin to cover the gamut of the input domain. As the expert, you might have noticed those gaps had you performed even a cursory review of the LLM-generated code before blindly accepting it.
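To illustrate the point (this is a hypothetical example, not code from anyone's actual project): a subroutine can execute without throwing any errors on the happy path while still violating business rules that only a human review would catch.

```python
# Hypothetical LLM-generated subroutine: runs cleanly on typical inputs,
# so a "does it run" check passes, but the input domain is not validated.
def apply_discount(price, discount_pct):
    return price * (1 - discount_pct / 100)

# Happy-path check succeeds, which is all "it executes without errors" proves:
assert apply_discount(100, 10) == 90.0

# Edge cases a cursory expert review would flag, but the code happily accepts:
assert apply_discount(100, 150) == -50.0   # >100% discount yields a negative price
assert apply_discount(-100, 10) == -90.0   # negative input price goes unchallenged
```

The function never crashes, so a workflow that merges on "no immediate errors" ships both bugs.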

Nice to know that I'm going to be indefinitely employed fixing this kind of stuff for decades to come...


>You allow the AI to make some new subroutine

Again, you're using AI the wrong way.


This is exactly my impression of the summary of these kinds of posts and, I'm speculating here, maybe where there is such a stark difference.

I'm guessing that the folks who read the output and want to understand it deeply and want to "approve" it like a standard pull request are having a very different perspective and workflow than those who are just embracing the vibe.

I do not know if one leads to better outcomes than the other.


Are you serious? Why not just vibe work with your human coworkers and merge to master then? Let's see what the outcome is!


> Are you serious?

I am serious and didn't think anything I said here was contentious. Which part are you feeling incredulity over? I'll try to clarify if I've been unclear or learn from your perspective.


You seem to be unsure if checking the code is likely to lead to better outcomes.


I'm not passing judgement in that comment on whether carefully crafting and curating code is a net productivity win over quickly churning out AI slop that is ostensibly viable long term.

I _personally_ follow the careful crafting and review approach. I think a deep understanding of the systematic ideas underlying a project is critical to continuing to improve it effectively, and I believe code that is easy to understand is extremely valuable for that. In my experience so far, the AI stuff seems to create more, harder work for me in the long run, and I end up with a worse understanding than if I had just written it myself. But I recognize that's just my personal anecdata so far, and we're at the beginning of this AI coding landscape.

I'm also open to the idea that lots of software we currently carefully craft by hand could likely be autogenerated ad hoc as needed. Moreover, good engineers are extremely expensive and most software is quite bad anyway so I expect there is a new balance to be found here.

For my 2 cents, I'm not trading away my or my company's ability to understand our own software in exchange for AI vendor lock-in (even if the vendors weren't all owned by the current set of megalomaniacs), but I see the angles and I'm not afraid to consider the tradeoffs or have the conversation.



