I can't help but notice that LLMs are producing voices that sound more human-like every day, and now we expect the uniqueness of human voices, and of what they're trying to communicate, to simply fade away.
At what point does it stop making sense to have a human on the other side? Just let frustrated humans talk to the LLMs. They aren't hurt by your frustrated voice.
There are still jobs that absolutely, positively require a human, and I fear very much for the situations where I need to talk to a human to resolve an issue but there is no human to reach.
As a particularly morbid example, most systems out there are not prepared, at all, for an account holder to be dead and for other people to need to settle their affairs. Navigating even the more robust conversation trees to reach a human in those extreme edge cases is already a whole ordeal.
What about angry customers with reasonable demands? Can the AI recognize that their demands are reasonable?