
The problem is exactly how the public will learn "not to ask 2+2". When a well-trained professional uses an LLM, it's all great: they know how to separate hallucinations from genuinely good results, as you do. The problem lies with the general public and new workers who will, no question about it, treat AI-generated results as some sort of truth.


Maybe use an LLM to detect when the public is asking the wrong question and display a message saying "As a large language model, I don't know how to count."
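
A minimal sketch of that idea, with a cheap regex heuristic standing in for the LLM-based detector the comment suggests (the guardrail function and the pattern here are illustrative, not any real API):

    import re

    # Bare arithmetic questions like "what is 2+2?" -- the kind of
    # prompt the model shouldn't be trusted to answer.
    ARITHMETIC = re.compile(r"^\s*what\s+is\s+[\d\s+\-*/().]+\??\s*$",
                            re.IGNORECASE)

    def guardrail(question: str) -> str | None:
        # Intercept the question before it ever reaches the model.
        if ARITHMETIC.match(question):
            return "As a large language model, I don't know how to count."
        return None  # fall through to the normal model response

    print(guardrail("What is 2+2?"))

In practice the detector itself would need to be a small classifier model rather than a regex, since "wrong questions" aren't enumerable by pattern.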



