
Source on hallucination rates falling?

I use LLMs 20-30 times a day, and while they feel invaluable for personal use, where I can interpret the responses at my own discretion, they still hallucinate often enough and have enough lapses in logic that I would never feel confident incorporating them into a critical system.



My own experience, but if you insist:

https://www.visualcapitalist.com/ranked-ai-models-with-the-l...

99% of systems aren't critical, and for those, human validation is sufficient. In my own use case, that's enough to replace plenty of hours of human labour.



