I struggle to find non-evil applications of voice cloning. Maybe listening to your dead relative's voice one more time? But those use cases seem so niche next to the overwhelming uses this will likely have: misinformation, scamming, putting voice actors out of work.
The possibility of continuing to sound like yourself after permanently losing your voice (e.g. from motor neurone disease) is one. Perhaps almost the only one.
At my workplace, a colleague on another team used an AI tool to voice/video clone my company's CEO, CRO, and CTO (I assume with their permission) and created a mandatory 30-minute training video, with these monotone fake company leaders doing the presentation, that they expected us to watch. It wasn't even a joke.
Selling a voice profile for procedural/generated voice acting (similar to ElevenLabs "voices") of a well-known person or a pleasant-sounding voice could be a legitimate use case. But only if actual consent is acquired first.
Given that rights to one's likeness (personality rights) are somewhat defined, there might be a legitimate use case here. For example, a user might prefer a TTS with the voice of a familiar presenter from TV over a generic voice.
But it sounds exceedingly easy to abuse (like other generative AI applications) in order to exploit both end users (social engineering) and voice "providers" (exploitation of personality rights).
Eleven Labs pays the estates of the people whose voices they use, correct?
I have their app on my phone and it will read articles in Burt Reynolds's voice, Maya Angelou's voice, etc. I'm under the impression that they consented to this and their estates are being compensated (hopefully).
If you're in the USA, your credit card company captures your biometric voiceprint without consent or even notification when you call customer service. This technology makes that pointless as an authentication method.
Last year I confidently said "two thousand and five" during a video take and didn't notice it at the time. I was able to add the "twenty" using Descript.