
That is a very weak excuse to avoid these tools.

I know the tools and environments I work in. I verify the implementations I make by testing them. I review everything I generate.

The idea that AI is going to trick me is absurd. I'm a professional, not some vibe-coding script kiddie. I can recognise when the AI makes mistakes.

Have the humility to see that not everyone using AI is someone who doesn't know what they're doing, clicking accept on every suggestion the AI makes. That's not how this works.



AI is already tricking people -- images, text, video, voice. As these tools become more advanced, the cost of verification rises with them.


We're talking about software development here, not misinformation about politics or something.

Software is incredibly easy to verify compared to other domains. First, my own expertise picks up most mistakes during review. Second, automated linting, unit testing, integration testing, and manual testing are all but guaranteed to catch functionality that is wrong.
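
To make that concrete, here is a minimal sketch (pytest-style; the function and values are made up) of the kind of test that flags wrong functionality the moment it runs:

  # Hypothetical example: if the generated implementation gets the
  # behaviour wrong, this test fails immediately.
  def apply_discount(price: float, percent: float) -> float:
      """Return price reduced by the given percentage."""
      return round(price * (1 - percent / 100), 2)

  def test_apply_discount():
      assert apply_discount(100.0, 20.0) == 80.0
      assert apply_discount(19.99, 0.0) == 19.99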

So, how exactly do you think AI is going to trick me when I'm asking it to write a new migration to add a new table, link that into a model, and expose that in an API? I have done each of these things a hundred times, and the process is so routine that it's immediately obvious when the AI makes a mistake. The idea that it could trick me here is absurd.
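
For illustration, that routine looks something like this (a Django + DRF sketch; the model, fields, and endpoint names are all hypothetical):

  # models.py -- the new table; the migration itself is generated with
  # `python manage.py makemigrations`, not written by hand.
  from django.db import models

  class Widget(models.Model):
      name = models.CharField(max_length=100)
      created_at = models.DateTimeField(auto_now_add=True)

  # api.py -- expose the model through a REST endpoint.
  from rest_framework import serializers, viewsets

  class WidgetSerializer(serializers.ModelSerializer):
      class Meta:
          model = Widget
          fields = ["id", "name", "created_at"]

  class WidgetViewSet(viewsets.ModelViewSet):
      queryset = Widget.objects.all()
      serializer_class = WidgetSerializer

Every line of that is boilerplate I have reviewed hundreds of times, so any deviation stands out.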

AI does carry a risk of lulling people into a false sense of security. But that is a concern in areas like asking it to explain how a codebase works, or to teach you about a technology; there you can end up with a false idea of how something works. In software development itself, when I have already worked with all of these tools for years, it just isn't a big issue. And the benefits far outweigh it occasionally telling me about an API that doesn't actually exist, which I will realise almost immediately when the code fails to run.
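
For instance (a hypothetical Python example), an invented API fails loudly the first time it runs:

  import json

  # An LLM sometimes reaches for `json.parse` (a JavaScript habit); Python's
  # json module only has `loads`, so this dies on the very first run:
  data = json.parse('{"a": 1}')  # AttributeError: module 'json' has no attribute 'parse'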

People who dismiss AI because it makes mistakes are tiresome. The unreliability of LLMs is just another constraint to engineer around. It's not magic.


> Software is incredibly easy to verify compared to other domains

This strikes me as an absurd thing to believe when there's almost no such thing as bug-free software.


Yes, maybe using the word "verify" here is a bit confusing. The point was to compare software, where it is very easy to verify the positive case, to other domains where it is not possible to verify anything at all, and manual review is all you get.

For example, a research document could sound good but be complete nonsense, and there is no realistic way to check that an English document is correct other than to review it manually. Software, by contrast, has huge amounts of investment in tooling for testing whether a program does what it should for a given environment and set of test cases.

Now, this is different from formally verifying that the software is correct for all environments and inputs. But we definitely have far more verification tools at our disposal than most other domains do.
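
A toy example of that difference (hypothetical Python): the test below passes for its chosen cases even though the function is wrong in general:

  def absolute_value(x: int) -> int:
      return x  # bug: wrong for every x < 0

  def test_absolute_value():
      # Passes: the chosen cases happen to avoid the bug. Tests verify
      # specific inputs; they are not a proof over all inputs.
      assert absolute_value(3) == 3
      assert absolute_value(0) == 0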


That's fair, makes sense. Thank you for explaining further.


  Software is incredibly easy to verify compared to other domains.
Rice, Turing, and Gödel would like a word.


  compared to other domains

It's right there in the quote.



