Hacker News

I think the type of AI coding tool being used also affects a person's perception of how prevalent "hallucinations" are relative to other kinds of errors.

I usually use an agentic workflow, and "hallucination" isn't the first word that comes to mind when a model unloads a pile of error-ridden code slop for me to review. That's despite it being entirely possible that a hallucinated, non-existent parameter is what originally sent it off the rails and into the classic loop of breaking things further with each attempted fix.

Whereas with AI autocomplete/suggestions, an invented method name or argument (or whatever else) clearly jumps out as a "hallucination" if you're familiar with the code you're working on.


