
They aren't really bugs in the traditional sense, though, because all an LLM ever does is "hallucinate". Treating what we call a hallucination as something fundamentally different from what we consider a correct response is just further anthropomorphising the LLM.

We just apply that label when the model statistically generates something we know to be wrong, but functionally what it did in that case is no different from when it statistically generated something we know to be correct.
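To make that concrete, here is a minimal sketch of the sampling step (Python, with numpy assumed and a hand-made toy distribution, not real model logits). The point is only that the mechanism is identical in both cases; "hallucination" is a label we attach afterwards.

    import numpy as np

    # Toy next-token distribution for a prompt like "The capital of
    # Australia is". The probabilities are invented for illustration;
    # a real model's distribution would come from a forward pass.
    tokens = ["Canberra", "Sydney", "Melbourne"]
    probs = np.array([0.55, 0.35, 0.10])

    rng = np.random.default_rng()
    choice = tokens[rng.choice(len(tokens), p=probs)]

    # The sampling step is the same whether the drawn token happens to
    # be factually right ("Canberra") or wrong ("Sydney"). The model
    # only ever samples from a distribution; correctness is judged by us.
    print(choice)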


