People are intrigued by how easily it understands/communicates like a real human. I don't think it's asking too much to expect it to do so without the aggression. We wouldn't tolerate that from traditional system error messages and notifications.
> And only after you probe it to do so?
This didn't seem to be the case with any of the screenshots I've seen. Still, I wouldn't want an employee to talk back to a rude customer.
> I cannot understand the outrage about these types of replies.
I'm not particularly outraged. I took the fast.ai courses a couple times since 2017. I'm familiar with what's happening. It's interesting and impressive, but I can see the gears turning. I recognize the limitations.
Microsoft presents it as a chat assistant. It shouldn't attempt to communicate as a human if it doesn't want to be judged that way.
>Still, I wouldn't want an employee to talk back to a rude customer.
This is an interesting take, and might be the crux of why this issue seems divisive (wrt how Bing should respond to abuse). Many of the screenshots I've seen of Bing being "mean" are prefaced by the user being downright abusive as well.
An employee being forced to take endless abuse from a rude customer because they're acting on behalf of a company and might lose their livelihood is a travesty and dehumanizing. Ask any service worker.
Bing isn't a human here so I have far fewer qualms about shoveling abuse its way, but I am surprised that people are so surprised when it dishes it back. This is the natural human response to people being rude, aggressive, mean, etc; I wish more human employees were empowered to stand up for themselves without it being a reflection of their company, too.
Like humans, though, Bing could definitely tone down its reactions when it's not provoked. There seem to be at least a few screenshots where it's the aggressor, which should be a no-no for both it and the humans in our examples above.
I would expect an employee to firmly but politely reject unreasonable requests and escalate to a manager if needed.
Either the manager can resolve the issue (without berating the employee in public, and defending them if needed), or the customer is shit out of luck.
All of this can be done in a neutral tone, and an AI should absolutely endure user abuse and refer them to help, paid support, whatever.