I am talking about what an ordinary person thinks an answer is. If the AI industry could pull its collective head out of its ass, it would notice that humans are looking for answers that are factually true, not probabilistically close to the concept of a plausible answer to the given question. To put it simply: if I ask my bank for a statement for the last month, I expect a list of actual transactions and I expect the numbers to add up, while AI fanboys are happy with output that looks like a bank statement but has made-up entries with random amounts paid in or out. I will likely get a lecture about how this is the wrong domain to apply AI to, but the core objection stands: humans expect facts and order, whilst AI cannot tell fact from fabrication and will keep generating fake answers that merely look like what humans are asking for. Humans are wired for survival and for processing information in a way that turns chaos into order, pattern, plan, or action script. That is why we find it so easy to spot AI-generated content, and why we reject it.
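To make the "numbers must add up" point concrete, here's a toy sketch (the field layout and function are made up for illustration, not any real bank's API): a statement is fact-bound precisely because it has an invariant you can verify, and plausible-looking fabricated output will almost never satisfy it.

```python
from decimal import Decimal

def statement_is_consistent(opening: Decimal, closing: Decimal,
                            transactions: list[Decimal]) -> bool:
    """True only if the listed transactions actually account for the balance change."""
    return opening + sum(transactions, Decimal("0")) == closing

# Real data passes; a statement with invented amounts almost always fails.
print(statement_is_consistent(Decimal("100.00"), Decimal("70.00"),
                              [Decimal("-50.00"), Decimal("20.00")]))   # True
print(statement_is_consistent(Decimal("100.00"), Decimal("70.00"),
                              [Decimal("-50.00"), Decimal("19.37")]))   # False
```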
Ok, thanks for the clarification, that makes sense. I see you're being downvoted, possibly because your tone is off, but it's worth noting that you're not wrong... People expect an AI to interpret fuzzy inputs and still give definitive answers out. I call it the "Mr. Data fallacy". Unfortunately the reality is garbage in, garbage out... our inputs are garbage, so there must be some likelihood of garbage in the output.
I think maybe the best we can hope for is to push garbage responses to the edges: when the model gets it wrong, it gets it so obviously wrong (e.g. completely misformatted) that the answer is plainly unacceptable to the user, and obviously so.
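Roughly what I mean, as a toy sketch (the schema and function names are made up, not any particular product's API): validate the reply so strictly that anything off gets rejected loudly rather than displayed as a plausible-looking answer.

```python
import json

REQUIRED_FIELDS = {"date", "amount", "description"}  # illustrative schema, not a real spec

def parse_statement_reply(raw: str) -> list[dict]:
    """Accept a model reply only if it is strictly well-formed; otherwise fail loudly.

    The idea is to push garbage to the edges: a bad reply raises an error instead
    of slipping through as something that merely resembles a statement.
    """
    entries = json.loads(raw)  # malformed JSON -> immediate, visible failure
    if not isinstance(entries, list):
        raise ValueError("expected a JSON list of entries")
    for entry in entries:
        if not isinstance(entry, dict) or set(entry) != REQUIRED_FIELDS:
            raise ValueError(f"unexpected entry shape: {entry!r}")
    return entries
```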