If I am trying to interact with a company and they tell me to use their chatbot, I expect that chatbot to provide me with accurate answers 100% of the time (or, if I ask a question it isn't meant to handle, to say it can't help me and connect me to a representative who can).
If I have to double- and triple-check elsewhere to make sure that the chatbot is correct, then what's the point of using the chatbot in the first place? If you can't trust it 99% of the time, or if the company says "use this, but nothing it says should be taken as fact", then why would I waste my time?
This is why I'm a bit vexed by all the hype around LLMs. It reminds me of talking to a friend's mother who was suffering from dementia - she could have a perfectly lucid conversation with you and then segue into stories that were obvious fictions existing only in her head. She was a nice lady, but not someone you would hire to represent your company; she was considered disabled.
A while back another commenter called them a "demented Clippy," which about sums them up for me.
Yeah, totally. LLMs have a lot of awesome use cases. But as chatbots, they need a lot of guardrails, and even then, I'm highly skeptical that they improve the experience over simple searchable FAQs or docs.
> If I have to double-triple check elsewhere to make sure that the chatbot is correct, then what's the point of using the chat bot in the first place?
Because you can ask it a question in natural language and it will give you an answer you can type into a search engine to see if it's real. Before, you didn't know the name of the thing you were looking for; now you do.
> If you can't trust it 99% of the time, or if the company says "use this, but nothing it says should be taken as fact", then why would i waste my time?
The rate at which it makes stuff up isn't 99%; that's the point. For common questions, better than half of the answers have some basis in reality.
Why is that a necessary requirement? Something can be useful without it being perfect.