
One type of question that a 20%-failure-rate AI can still be very useful for is the kind that's hard to answer but easy to verify.

For example, say you have a complex medical problem. It can be difficult to do a direct Internet search that covers the history and symptoms. If you ask an AI, though, it'll be able to give you some ideas for specific things to search. They might be wrong answers, but now you can easily search those specific conditions and check them.

Sort of P vs. NP for questions.



> For example, say you have a complex medical problem.

Or you go to a doctor instead of imagining answers.


You put too much faith in doctors. Pretty much every woman I know has been waved off for issues that turned serious later, and even as a guy I have to do above-average legwork to get them to care about anything.


Doctors are still better than LLMs, by a lot.


All the recent studies I've read actually show the opposite: that even models that are no longer considered useful are as good as or better than the mean human physician at diagnosis.


To add to that, real doctors have incentives which lead to malpractice. Malpractice is not a minor issue.


Medical was just one example; replace it with anything you like.

As another example, you can give the AI a photo of something and have it name what that thing is. Then you can look the thing up by name on Google to see if it matches. That's much easier than describing the thing (plant, tool, etc.) to Google.


Having the wrong information can be more detrimental than having no information at all. In the former case, confident actions will be taken. In the latter case, the person will be tentative, which can reduce the area of effect of bad decisions.

Imagine the average person confronted with this:

  sudo rm -rf /
Which is the better situation: having no understanding of what it does, or believing that some other action will take place?


The process I'm suggesting is:

1. You have a complex or vague question that you can't easily search via Google, etc.

2. You ask the AI and it converts that into concrete, searchable suggestions (in this case, "sudo rm -rf /").

3. You search "sudo rm -rf /" to check the answer.

Step 3 is designed to (hopefully) catch this kind of problem.
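
To make that loop concrete, here's a rough Python sketch. "ask_llm" and "web_search" are hypothetical placeholders for whatever model and search APIs you actually use; this isn't any particular library's interface, just the shape of the process:

  from typing import Callable

  def suggest_and_verify(
      vague_question: str,
      ask_llm: Callable[[str], list[str]],
      web_search: Callable[[str], list[str]],
  ) -> dict[str, list[str]]:
      # Step 2: the model turns a hard-to-search question into
      # a handful of concrete, searchable terms.
      candidates = ask_llm(
          f"Give 3-5 specific, searchable terms for: {vague_question}"
      )
      # Step 3: each term is cheap to check against sources the
      # model doesn't control; you read the results and discard
      # the model's misses yourself.
      return {term: web_search(term) for term in candidates}

The reason step 3 uses raw search results rather than asking the model to double-check itself is that the verification should come from a source independent of the thing being verified.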


literally the LAST place I would go (I am American)



