
This problem has already gotten so much better. In my experience it's no longer 10% of the time (I'd estimate more like 1%). In the end, you still need to use judgment; maybe it doesn't matter if it's wrong, and maybe it really does. It could be citing papers, and even then you don't know if the results are reproducible.


Has it actually become that much better or have you let your standards and judgment lapse because you want to trust it?

How would you even know how to evaluate that?


Ya, I've had basically this question for a while. My assumption is that most of the time people search the internet to answer questions they DON'T know the answer to.

If an LLM gives you a response to that question, how do you know if it's right or wrong without already knowing the answer or verifying it some other way? Is everyone just assuming the AI answers are right the majority of the time? Has there been large-scale verification across a wide variety of questions that I'm not aware of?



