
So based on that human judgment, no "similar" voice to a public figure can be used in a product/service/demo/ad etc. without that person's consent. What's next? Banning similar people from appearing in demos? Banning particular musical notes from being present in songs because someone else played that note in a song?

As long as it's not literally her voice (or a model trained directly on her voice) they should have rights to do anything they want.




Again, you’re looking for some sort of absolute that either allows or prohibits something in all cases based on a set of ironclad rules, and that doesn’t exist because intent matters in human endeavor.

If a similar voice was used because it was _reminiscent of_ “Her” but was still a unique performance, then it probably wouldn’t be a problem. Where there’s a real problem is if it’s an intentional mimicry of a specific person’s specific vocal performance.

And Johansson can show OpenAI had such intent in court pretty easily, between their attempts to get her consent to use her voice (which she denied) as well as their CEO’s “her” tweet promoting the voice.


And the intent was a tone, not any particular person's voice.

That person's (Scarlett's) voice just happens to be a great match for the intended tone, and since she refused, they went to the next candidate.


It’s extremely clear that their intent was to use her voice, not just a gestalt or tone, given that they approached her multiple times—including after they’d actually implemented it using another voice actor.

Do you really think that everything exists in a vacuum and must be decided from first principles, giving the maximum benefit of the doubt to one party over another? There is extensive case law on exactly this situation, and OpenAI has run afoul of it.



