One type of question a 20%-failure-rate AI can still be very useful for is the kind that is hard to answer but easy to verify.
For example, say you have a complex medical problem. It can be difficult to do a direct Internet search that covers the history and symptoms. If you ask an AI, though, it can give you specific things to search for. Its answers might be wrong, but now you can easily look up each suggested condition and check it.
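A minimal sketch of that generate-then-verify pattern, in Python. The propose and check callables are stand-ins for whatever LLM client and search-plus-reading step you actually use; nothing here is a real API:

    from typing import Callable, Iterable

    def verified_answers(
        question: str,
        propose: Callable[[str], Iterable[str]],  # stand-in for an LLM call
        check: Callable[[str], bool],             # stand-in for a search you read yourself
    ) -> list[str]:
        # Generate-then-verify: wrong guesses are cheap,
        # because each guess is independently checkable.
        return [c for c in propose(question) if check(c)]

    # Toy usage: the "model" proposes three conditions; the check stands in
    # for you searching each name and reading what comes back.
    guesses = lambda q: ["condition A", "condition B", "condition C"]
    holds_up = lambda c: c != "condition B"  # pretend B didn't match the symptoms
    print(verified_answers("what explains these symptoms?", guesses, holds_up))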
You put too much faith in doctors. Pretty much every woman I know has been waved off about issues that later turned serious, and even as a guy I have to do above-average legwork to get them to care about anything.
All the recent studies I’ve read actually show the opposite: even models now considered obsolete are as good as or better at diagnosis than the mean human physician.
Medical was just one example, replace with anything you like.
As another example, you can give the AI a photo of something and have it name what that thing is. Then you can search that name on Google and see whether the results match. Much easier than trying to describe the thing (plant, tool, etc.) to Google.
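As a sketch of the check step: once the model names the thing, one line turns that name into a Google Images query you can eyeball. The tbm=isch parameter selects image results; Google's URL parameters can change over time:

    from urllib.parse import quote_plus

    def image_check_url(candidate_name: str) -> str:
        # The model's guess gives you the search key you couldn't
        # have produced from the photo yourself.
        return "https://www.google.com/search?tbm=isch&q=" + quote_plus(candidate_name)

    # e.g. the model looks at your photo and says "Japanese knotweed"
    print(image_check_url("Japanese knotweed"))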
Having the wrong information can be more detrimental than having no information at all. In the former case, confident actions will be taken. In the latter case, the person will be tentative, which can reduce the area of effect of bad decisions.
Imagine the average person confronted with this:
sudo rm -rf /
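# Recursively force-deletes everything from the filesystem root.
# GNU rm refuses this without --no-preserve-root, but other implementations may not.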
Which is the better situation: having no understanding of what it does, or confidently believing it will do something else entirely?
Sort of P vs. NP for questions: finding the answer may be hard, but verifying a candidate answer is cheap.