
This is exactly what I've been saying: it's not that LLMs sometimes "hallucinate" and thereby give wrong answers; it's that they never provide right answers at all. We as humans ascribe "rightness" to the synthetic text extruded by these algorithms after the fact, as we evaluate what it means. The synthetic text extruder doesn't "care" one way or the other.


