
LLMs will not be doing that. I wish they could, but they just spit out whatever without verifying anything. Even in Cursor, where the agent tells you to run the test script it generated to verify the output, it just says "yep, seems fine to me!".

AI in its current state is, in my workflow, a decent search engine and Stack Overflow. But it has far greater pitfalls, as OP pointed out (it just assumes its code is always 100% accurate and will "fake" APIs).
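One cheap defense against those "faked" APIs is to confirm a suggested function actually exists before trusting generated code. A minimal Python sketch (the `api_exists` helper is hypothetical, not something from this thread):

```python
import importlib

def api_exists(module_name: str, attr_path: str) -> bool:
    """Return True only if module_name can be imported and
    the dotted attr_path resolves on it."""
    try:
        obj = importlib.import_module(module_name)
    except ImportError:
        return False
    for part in attr_path.split("."):
        if not hasattr(obj, part):
            return False
        obj = getattr(obj, part)
    return True

print(api_exists("json", "dumps"))      # real API -> True
print(api_exists("json", "serialize"))  # plausible-sounding but fake -> False
```

It obviously can't tell you the call is used correctly, only that the name isn't invented, which already catches a surprising share of hallucinations.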



That’s where you, the human, come into the scene.


And that’s where I end up wasting more time investigating and fixing issues, rather than creating a solution ;)

I only use AI for small problems rather than letting it orchestrate entire files.



