>Out of all the celebrities, Bourdain would have viscerally despised this.
I would just like to point out the irony in this comment. You're speaking on behalf of a dead person in objection to someone else speaking on behalf of that person.
We can debate the ethics of this whole thing without trying to lay claim to what Bourdain would have thought.
After reading his books, I would agree and was hoping someone had posted this. If anyone would vehemently oppose such manipulation, it would be Anthony Bourdain. Is there a disclaimer before this scene is shown? I'll choose not to watch such a documentary created by those with skewed ethics.
There does not seem to be any disclaimer regarding the manipulation, and the OP article questions whether the director would even have mentioned it had it not come up tangentially in the course of this interview.
As I understand it (I read this on Twitter, so massive grain of salt etc.), the family consented to this explicitly. Would you feel better if it was a voice actor who could read Bourdain’s email in a perfect mimic? What about historical recreations, would you object to computers being involved in, say, recreating Abraham Lincoln’s voice? Does the age of the subject matter?
As AI becomes more and more capable, I think it will be interesting to pin down why we feel this squeamishness. The machine is going to be capable of (and consequently used for) these things whether you like it or not.
Any of those are fine - with disclosure. It seems pretty clear that in this case, they were less than transparent.
> “If you watch the film, other than that line you mentioned, you probably don’t know what the other lines are that were spoken by the A.I., and you’re not going to know,” Neville told the reviewer, Helen Rosner. “We can have a documentary-ethics panel about it later.”
...or we can have it sooner, on Twitter, and you'll get excoriated, and rightly so. I don't care if they train an AI to imitate Bourdain rapping the third verse from Modern Major General, it's a free country, but you have to be honest about it. You can't call yourself a documentarian and get cutesy about the authenticity of the material.
Call me cynical, but a documentary needs publicity to be successful. It seems to me this ethical dilemma generates just about the right amount of outrage for people to pay attention without overly hurting the filmmaker's reputation.
Perhaps we'll see a mea culpa later on, where he will say he misjudged the amount of outrage, which is not a real apology, so lots of commenters will jump on that thereby providing even more publicity.
I just don't like how people will go to any length just for "publicity".
Creating an outrage for some publicity of a documentary just doesn't sit right with me. Yes, it is an effective publicity stunt, but how do people not have that inner voice (I don't know what else to call it) that tells them this isn't what they should be doing without proper consent of the family members and without any disclosure over what is real and what isn't?
> She's not his widow, she's his estranged ex-wife.
That's incorrect. They were still married when he died and most of Bourdain's estate was left to their daughter with Ottavia overseeing it. "The family," in this case, is Ottavia and Ariadne Bourdain.
Fine. Estranged widow. The point is that if anyone has control of his legacy, it's her, and the director claims to have consulted his widow, which couldn't really be anyone other than her. That means one of them is lying about giving the go-ahead for the deepfake voiceover.
We don't know that anyone lied. The guy had two ex-wives and a girlfriend. Perhaps they didn't ask enough family members for someone's taste. But as others noted, an objection from one ex-wife doesn't really make that case very strongly.
Just about every Presidents’ Day I’ll hear car-dealership ads with voice actors pretending to be Clinton or Bush, and on TV people dressed up as Lincoln… so I dunno.
Or, more simply, is it an attempt at deceiving people into believing it’s the actual individual being impersonated?
Impersonating someone like Lincoln in a car commercial is one thing, but it’s another if someone presents an impersonation as the person's actual voice in a documentary.
It’s also complicated by the fact that the AI voice is trained on his likeness, and it's dubious whether the reproduction is a new creation or something else. Similar to the debate about GitHub Copilot.
On which timescale does something become historical? Are the Beatles historical? To you, maybe not, but possibly to your children. Where is the line you’re drawing?
If I’m honest, humans have never really done the moral thinking up front; we largely just muddle through and hope to get away with it.
Time doesn't matter as much as what is said. Fake Anthony Bourdain voice reading something he wrote seems fine to me, especially in the context of, say, a documentary.
John Lennon's voice doing an ad for Vox amps seems in poor taste.
>John Lennon's voice doing an ad for Vox amps seems in poor taste.
the Bourdain documentary we're talking about was created in order to generate profit; it has box-office showings and advertisement campaigns.
A documentary is nice, but I fail to see how it's less of a product than Vox amplifiers, aside from some shallow altruism like "We're spreading the ideas of Bourdain", which could be done entirely without a profit scheme.
I don’t understand, are you saying documentaries should be made to generate a loss? Do you think the same documentary would be better if it didn’t have advertising or box-office showings? I really don’t understand the logic here at all! Please explain rather than downvote like in the other comment where I ask about this.
>I don’t understand, are you saying documentaries should be made to generate a loss?
No, the 'for profit' nature of this venture is just the amoral cherry-on-top.
My real problems with this type of thing are many-fold :
1) Textual messages are often created in such a way as to be convenient rather than to be read aloud. Inflection and tone mostly go out the window when written out, so people tend to use lots of hyperbole and metaphor to better convey nuance like sarcasm -- this can quickly make someone who is eloquent in person look moronic in text, but people close to the individual know better and ignore the email parlance.
2) There is a level of artistry and creative interpretation in deciding how an AI voice generates text-to-speech: what samples are used, how the phrase is tempo'd, environmental interpretation, etc. The end product is very much not-Bourdain, but it's sold to people watching the documentary as Bourdain, so it's assumed that he said these things... which leads to:
3) Personally I believe that the post-death 'image' of anyone should only be manipulated or altered by third-party testimonial and record-level evidence.
An unaware person watching this documentary would come out of the theater with the notion that Bourdain said things that he never said.
A third party is using Bourdain's own image to sell persona/image changes to the public on his own behalf, post-death -- I have major ethical and moral qualms with this idea, even with all the express permission of those around him and his family.
Regardless of who these people are that gave permission -- they're not him. The voice/image/actions of your own persona should probably never be tampered with by someone else -- even the closest people to him can't begin to know what he'd say internally, even if they may have felt that they had a good grasp.
Now, sure, we can always argue that a celebrity's image is likely sold to the companies managing them -- and I would have to agree. Legally, sure, his image likely isn't his own property, but I don't have to agree with the methodology on an ethical level, and I have a real hard time imagining that I ever will.
>Do you think the same documentary would be better if it didn’t have advertising or box-office showings?
Yes. I am less against the manipulated and possibly perjured conjuration of fake dead-people's voices when someone isn't making a profit.
(I detected a bit of snark in your question, so I replied with a bit of snark -- but it's an honest answer. Apologies if you didn't mean for it to be interpreted that way -- another incongruity in text-based-talk :) )
Just one example of the prickliness of this with the for-profit aspect in scope: an AI-generated likeness of a famous voice says "Eat Snickers!", after the family of the original voice gives all the permissions in the world to whoever may want to use it.
Tons of royalties are accrued.
Who is given the royalties?
so, tl;dr, here's the point I feel most strongly about : An unaware person watching this documentary would come out of the theater with the notion that Bourdain said things that he never said, in a way that he perhaps never would have said them. This technology, along with others like it, creates a situation in which a third party can manipulate the persona of a long-dead person in a first-person way, thus circumventing a lot of the trust checks that exist with regards to the perceived image of the now-dead.
I don't think this particular example is damaging, even if I think it's unethical -- however I do think that it's an indicator of things to come, and I believe that it is going to be heavily abused until some regulatory body makes lots of rules and regulations regarding the depiction of image of those that are no longer able to fight their own libel cases.
Good point. I suppose I'd say I'm comfortable with some combination of appropriately compensating the heirs/estate and some sort of 'respect for the dead'.
The change of medium is significant IMO. Rarely in spoken word do we expect a citation of when/where the words are spoken (barring for example, famous speeches/addresses), but in written form it often is. In this case, Fake AB Voice was reading part of a bigger piece of text - but it's not obvious what that underlying text is, making further analysis of the context in which it's spoken a bit more difficult.
Idk if historical is based on time. I think intention is more important. If you recreate the Beatles purely for profit, then it seems unethical and disrespectful. If you want to keep someone's voice for record keeping or something educational, that seems different.
Why do people in this thread seem so focused on profit? Everything has a need to make money or do you all work for free? Why is trying to make money on some hopefully valuable art you’ve produced bad?
I don’t think my family should have the ability to make decisions regarding my likeness. My likeness is one of those things which is just totally 100% mine. I married my wife and had kids with her and would want her to have all of my money if I died but wouldn’t want her to be able to license my voice and face to promote Wheaties cereal or something. It shouldn’t be allowed unless the person explicitly grants those rights while alive.
You get a copyright automatically on anything you write... why not on your likeness (insofar as it specifically refers to you)? The 1,000 Elvises are part of the problem: although Elvis chose to be an extremely public figure, I'm not sure he would want his likeness to essentially be a cartoon character for Vegas weddings and the like. I would say that a person's likeness should be off-limits to everyone, unless specifically authorized while alive, for the duration of the copyright term (70 years after death), and then become public domain (family members should not be monetizing their relative's likeness after their death, unless specifically authorized to do so).
Existing law on likeness already deals with this. If you're the identical twin of a celebrity, you're allowed to use your likeness as it relates to you (i.e. use your photograph to promote your real estate business). You're not allowed to use it to suggest the celebrity in question.
That was the Sicily episode- Bourdain wanted to show the ugly, scammy underbelly of Sicily I think. This kind of thing is there in a lot of the Mediterranean, but it unfairly makes all of Sicily look bad. I had a bad impression of the whole island until I learned more about it years later. Sicily is mostly nice with a few pockets of unsavoriness like anywhere, not a mafia island. In my opinion too many episodes are like that- projection of a certain fantasy about a place while ostensibly engaging with the real authentic experience.
Often you see what you want to see in the world. As much as I adore Bourdain, I have to admit he was undoubtedly jaded, and there were trips where that colored everything. For many, depression comes in waves and lasts for periods of time. He showed a lot of the good and bad in these places, but everywhere has a bit of both.
Bourdain put out an incredible body of work overcoming a lot of lifelong issues, and I still love his show. It's just that recently, seeing someone like Rick Stein in the Mediterranean, I realized it's completely possible to do an area justice while also highlighting its best features. I feel like Anthony Bourdain, with his rebel image, would worry about being seen as a sellout or too bourgeois if he went to a fancy restaurant in Sicily (unless he went ironically and didn't enjoy it). I question my perception of some countries I know only or mostly from Parts Unknown now.
He did go to some really fancy places in some episodes, the kind with one-of-a-kind $300 appetizers and multi-thousand dollar wines with their own climate-controlled storage facility.
But he also went to Waffle House and enjoyed it. And hiked through the jungle to eat and drink (mostly drink) at a local festival and sleep in a hallway with the rest of the locals.
That was what really impressed a lot of us - he seemed to get along with everyone and to value and accept their differences. The food often seemed to be largely an excuse to explore and show that message.
Just quick context on the squid bit: it wasn’t Tony’s crew (zero point zero productions), it was the combination of local fixers/boat folks who wanted to make it look like they caught fish.
Funny enough, him describing the ploy while sort of mocking it as it happened led to an enjoyable moment in that episode (imo). It showcased his humility, and ability to make the most out of a situation while not taking himself too seriously.
Thing is, the more I rewatch his shows, all of his comments about depression and anger have taken on a really somber tone given what we know now. I wish someone had been there with him to help him through that time.
His death really affected me in a deep way I didn’t expect. I miss him so much.
One of the other big scenes like that is up-river in SE Asia, talking about "how he wished that he could say it would be difficult [to kill a pig with a spear], but that time and distance have hardened the person he once was".
That's one of my favorite scenes. He is so angry drunk and it's his birthday and you can tell he doesn't like the chef at all (the chef is the same guy who took them out "fishing" earlier in the day).
I enjoyed the running gag (based on many episodes) of Bourdain going on a local fishing/hunting trip and failing to catch anything, and then having to scramble to some other local place and make up for the lack of a catch which some times were some of the highlights of the episode. And then on rare occasions when they did catch their prey, it made for a nice juxtaposition.
Refusing to disclose which dialogue the AI generated serves no one but the documentary maker himself. He’s not protecting a source or doing anything commendable. It doesn’t make the documentary better in any way.
This is not axiomatic truth. I reviewed Laurie Woolever's first posthumous book about/with Bourdain; check out how the 'bone luge' was invented for one of his shows: https://niklasblog.com/?p=24870 Also, as is clearly attested by many of Bourdain's friends in Woolever's forthcoming oral-history book on Bourdain, he lied about things, as most people do. He claimed to be human and not perfect.
Important to point out it wasn’t the crew faking the scene. The guy that took him fishing had a friend throwing squid in, unashamedly. They showed the guy on camera. There was no attempt at trickery from the people producing the show.
As a viewer of the documentary, I would love that effect instead of a bland voice-over.
But a note should be added on the screen that the voice is AI-generated. Like when they say a war video is a reenactment.
Why would it have to be a bland voice-over? It's an email he sent to a friend. Have the friend read it. I imagine the friend would become emotional. That's far more riveting than a computer recreation, no?
Agreed, this whole issue could be solved with a message before the film runs about how the voiceover is created, and then a "recreation" label during that scene.
I don't think so. There are still other ethical considerations. Bourdain likely would not have liked this, as others in the thread point out. So it is weird to honor someone with a documentary but not honor their wishes.
What makes those considerations significant enough to be worth the effort of evaluating though? It's too late to stop the tech from existing, there's no individuals actually being harmed, and even if you find a framework for arguing harm that's compelling the end-game is just going to be updated contracts which demand rights to use the performer's likeness for these sorts of purposes. The dialogue just seems like a lot of opining for the sake of itself with a fashionable hint of 21st century doom-cult luddism. I guess maybe the unions might have a reason to worry but I don't have a lot of sympathy for unions representing millionaires.
I have no qualms with the tech existing, assuming I do is you creating an argument I don't have. In fact I work in ML generative modeling. But just because a tool exists doesn't mean you shouldn't be thoughtful about how you use it. Technology isn't really good or bad on its own.
> there's no individuals actually being harmed
This is debatable. I think a lot depends on how you think about dead people. I believe most people would take the stance that it is unethical to use someone's image to promote things they did not stand for. You're right that it is very difficult, perhaps impossible, to harm a dead person, but there's still respect. There's also a family that lives with how you portray said person. They _can_ be harmed.
> the end-game is just going to be updated contracts which demand rights to use the performer's likeness for these sorts of purposes
If there's (informed) consent then what's the issue? (If we assume that said person knows what they have consented to)
Forgive me if I'm wrong, but your response seems to be in bad faith and to operate on many assumptions about me that just do not hold true. Frankly, this is not the kind of conversation I want to have on HN or expect to have. I'm happy to argue about the ethics and hold differing opinions, but I do not appreciate my image being painted with a broad brush. As long as the conversation is in good faith, we can discuss a lot of controversial things and disagree all day, and everything will be fine. But turning
> Bourdain likely would not have liked this
into
> a fashionable hint of 21st century doom-cult luddism
Is just ridiculous. That's far too large of a leap and such comments are not welcome here. You're welcome to try again but you'll need to update your priors.
I think that is a fair question to ask but I don't agree with your conclusions.
You are essentially asking why bother investigating and forming opinions and value judgements on the use of new technology. Humans use opinions and values to decide how they want to interact with the outside world. Even if the technology exists and no living person is harmed, people can find it distasteful. People can decide not to support what they find distasteful. They can decide to voice support for actors' unions if the issue ever comes up. They can tell their friends and family how they would like their likeness used as these technologies become more prevalent.
I have a machine with two GPUs and a frozen OS with a tensorflow python app that clones voices, and I'd say the quality passes if you run it through a phone bandpass filter.
I've had an idea to use propellerhead recycle to chop the output cloned voice into syllables, and then "play" the chopped parts in rhythm, through autotune.
The issue is you get Eiffel 65-sounding autotune if your base vocals are monotonic or way off key. The only way I can think of to fix this is to use something like Audacity's pitch changer, which doesn't affect the speed of the sample: rough the lyrical tones in with Audacity/Recycle, then autotune it where it needs to go.
I'd like to say I'm too busy to get this workflow going, but mostly I'm lazy and someone else will do it first - and better - I can't improve the AI cloning software.
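To make the semitone-snapping part concrete: once you have a rough f0 estimate for each chopped syllable, working out how far to pull it toward the nearest note is just arithmetic. A minimal numpy sketch (the function name and the 450 Hz input are made-up illustrations, not from any real clip):

```python
import numpy as np

def semitone_correction(f0_hz, a4=440.0):
    """Pitch shift (in semitones) needed to snap a measured fundamental
    frequency to the nearest equal-temperament note."""
    # Distance from A4 in fractional semitones.
    semitones_from_a4 = 12.0 * np.log2(f0_hz / a4)
    # Snap to the nearest whole semitone; the difference is the correction.
    return np.round(semitones_from_a4) - semitones_from_a4

# A syllable measured at 450 Hz sits about 0.39 semitones sharp of A4,
# so it needs to be pulled down by that amount.
print(semitone_correction(450.0))
```

Something like Audacity's change-pitch effect (or librosa's `pitch_shift`) can then apply that correction without touching the sample's speed.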
I've had a new version of this in the works for a few months. It'll launch maybe this weekend? It supports user-trained data sets, which should be pretty neat.
Your approach sounds kind of like unit selection or vocaloid. Do it!! :)
Your Gilbert Gottfried is better than mine, in that it's more obviously Gilbert; however, when he's doing interviews and not "in character", the one I have sounds closer to his actual voice.[0][1]
I use(d) "Real-Time Voice Cloning", available on GitHub. I have used your site before; I think that's actually the reason I wanted to find software to do it myself originally!
I ended up using mine to do the announcements on all of the ham repeaters in Central Louisiana. I'm not sure how many are still active, but I had a couple of podcasters and two presidents doing the announcements for a while.[2]
Nice work, wrapping it all in a website/service. I'll bookmark it!
I'd love to hear more about how this is done, but I've never done a thing with any of this other than trying to predict the future and seeing this as something that will take over eventually. I'm Natsu#7152 on Discord.
How much time of audio or data do you need on average to train up something? I guess I'm wondering if I only had a few minutes of someone speaking, would that be enough?
The implementation I am using works best with several <30-second clips. I've tested it by cutting up 20-minute interviews to keep only the person I care about, and the results seem to be about the same as with a half dozen 15-second clips.
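The cutting-up step is easy to script, for what it's worth. A sketch under the assumption that the interview is already a mono numpy array at 16 kHz (the names and the 20-minute figure are just illustrative):

```python
import numpy as np

def split_into_clips(samples, sr, clip_seconds=15):
    """Split a mono audio array into fixed-length training clips,
    dropping any short remainder at the end."""
    clip_len = int(sr * clip_seconds)
    n_clips = len(samples) // clip_len
    return [samples[i * clip_len:(i + 1) * clip_len] for i in range(n_clips)]

sr = 16_000
interview = np.zeros(20 * 60 * sr, dtype=np.float32)  # 20 min of stand-in audio
clips = split_into_clips(interview, sr)
print(len(clips))  # 80 fifteen-second clips
```

In practice you'd also want to drop silent or multi-speaker clips before feeding them to the trainer.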
You should watch The Doors movie by Oliver Stone starring Val Kilmer. Much of the music, including vocals, was redone with Val singing. It can be quite unsettling at times
We're running up towards "Thou shalt not make a machine in the likeness of a human mind." and it is appropriate.
It isn't wrong to replace a human with a machine, I'm not going after calculators, spreadsheets, or farm machinery.
What is to come though is a moral imperative that making machines mimic people is deeply wrong.
Our worldviews, values, and judgements are based on interacting with other humans and to have that set of long-tuned intelligence modeling of minds around us subverted by a computer model will bring out strong objections and eventually violence.
The science fiction writers had it wrong, the first conflict with machines isn't going to happen with an advanced artificial intelligence, it is going to be against naive misbehavior and trickery much like this, especially when the artifice comes into the real world (first iteration: self-driving cars). Semi-religious anti-machine fanatics will be the first wave with terrorist style attacks on machines ... that is if the sane moderates don't do enough to rein in the coming of the artificials first. Wait for a smoking gun event like a car driving itself into a gathering of schoolchildren to get things started.
"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them." - Frank Herbert (Dune)
This here. Machines can only become the enemy when they gain sentience, form a civilization, and declare independence.
Until then, it's not the machines that are the problem. It's the people behind the machines, people making business decisions about operations and deployment of the machines.
> Wait for a smoking gun event like a car driving itself into a gathering of schoolchildren to get things started.
The actual gun industry has somehow managed to avoid being sunk by this problem by keeping the blame on humans despite a school shooting every few months in the US.
Cars will be explicitly advertised as letting you turn the automation on, go to sleep, and arrive at the destination, but if the automation hits anything, the sleeping human will remain liable. The misleading advertiser will not be.
It should be covered under "right of publicity" laws [0]. TFA touched on this. These are the laws that prevent people from capitalizing on celebrities' names or likenesses without permission, and they apply even after the person is dead. An impression of a person's voice generated from actual recordings of that voice is new ground, but it reasonably falls under "likeness."
There are two noteworthy limitations here:
1. The laws require a plaintiff to sue. If Bourdain left control of his public likeness to someone (like a family member or agent), they would have standing. I have no idea whether anyone receives those rights by default if he died without explicitly granting them.
2. You don't need permission for using a person's name or likeness in relevant news or commentary. So this documentary might be protected.
IANAL, so if anyone knows relevant case law, please share.
And, of course, these are just the laws we have now. Whether we need new laws is iffy to me. I can just dislike this documentary using a generated voice because it's deceptive. Bourdain never recorded those words, but people watching the documentary could easily believe otherwise. It's not like TFA's example of a Civil War soldier reading a letter. I know he died before quality voice recording. And many documentaries label readings of modern documents with "Dramatic reading by actor" or similar.
That seems like an impossible thing to do - how would you define "your voice" and how much would it have to be changed to no longer be considered "your voice"?
What would happen between two people that sound similar, or even identical, to our imperfect human senses?
Court precedent. Like pretty much everything else, judges would have to define the lines over decades through court cases.
I'm not sure if it should go that way. I don't like the idea that someone like Disney could hire a voice actor once and then steal their voice forever. And it feels like we are headed down that path.
On the flip side, if you could trademark your voice, someone like Disney could buy up a bunch of trademarks and use them to sue somewhat similar voices. Kinda feels like a lose-lose scenario.
>someone like Disney could buy up a bunch of trademarks and use them to sue somewhat similar voices. Kinda feels like a lose-lose scenario.
That's only if you're using trademarked voices to begin with though, right?
The reason I think trademark needs to exist here is because with a voice, as with an image or wording, comes a value, or expectation of value, that can be invested in it. Like Bourdain's voice.
I get why this is newsworthy, but I don't get why it's an ethical problem. How is it any different from hiring a really good Bourdain impersonator to read the email? Lots of celebrities have pitch-perfect impersonators; is this a thing we worried about when it wasn't AI doing the impersonation?
I think the question is about a "documentary" using synthesized voices presented as if they are recordings. You'd have the same issue with impersonation by a human; it is presenting one thing as something it is not.
Getting away from the specifics of this case, cheap impersonation of arbitrary individuals getting easier raises lots of questions... all those TV spy shows "voice print cloning" stuff becoming more real will make a lot of things involving non-face-to-face trust more interesting.
Has it always been considered an ethical violation to have an impersonator read something a celebrity wrote? This can't be the first time that's happened.
I've no doubt that it's happened before... Maybe what's really changed here is that we no longer need to convince the human impersonator that this is an acceptable thing to do, because we can just have a machine do it.
I think the concerns are 1, this is passed off without real buy in from next of kin for a person who would be against it if he were alive. And 2, I think the attention and concern is heightened because this shows it can be done at scale to anyone with ease instead of having very talented people individually read off scripts into a mic.
Consider getting a voicemail from your dead grandmother tomorrow. Public figure or not, we need to have a line somewhere where it’s no longer okay to scrape a profit from every little thing we can think of. Can we let dead guys just be dead?
I think this is likely the reason it was done too.
The snide remarks from the documentary maker; “We can have a documentary-ethics panel about it later.” make it seem like he is trying to get a fire burning around this.
My brother and I had a running joke that Tom Araya from Slayer died in the mid '90s, based on his ashen appearance on the Divine Intervention sleeve. Our "theory" was that all of his vocals after that were bits of previous recordings reassembled into new songs. It sounded technically possible with manual editing back then, but it has since become automatable.
In this case, for a documentary, I see how that’s a defensible use case. They could hire an actor to read the email, imitating Bourdain’s voice as closely as possible, to the same effect. Other uses certainly could be problematic. Discussing them and working out legal and cultural rules is very relevant right now. We’ve already had posthumous celebrity event appearances over the past couple of years. Famous actors could be in movies without needing to be involved very much - mainly an IP license. I have no doubt record companies would love to create new music with all the artists of the '60s-'80s, dead or alive.
There are ethical concerns but I think we'll just learn to live with it.
Forging signatures is easy and has been possible forever, but signatures are still used for very important purposes (even electronically now!).
It may take a while for the legal system to catch up, and using voice recordings as evidence may be trickier than before.
But on a more exciting front, I think synthetic voice can be made like fonts.
Celebrities and voice actors will be able to sell their synthetic voice like how fonts are sold today.
You'll be able to change the voice of your Alexa, Siri to a voice you downloaded from a marketplace.
Netflix may even let you select the narrator's voice for your favorite documentary.
Hundreds of years later, people may be watching a documentary in Morgan Freeman's voice while having no idea that he was actually a famous actor in the past.
Signatures used for very important purposes are only decoration. They must be used in conjunction with something else (a notary, an independent exchange that confirms that the signature is actually binding, etc.).
The electronic signature is completely different: there is no visible artefact anymore, but a process that seals a document and (under appropriate legislation) certifies that the signer is who they claim to be. A visual object is sometimes added for aesthetic purposes.
What you listed are just ways we've learned to live with the forgeable nature of signatures. Now voice recordings (and even videos) will have a similar fate in the future.
I think Neville went by a common motto: "All publicity is good publicity."
Because he didn't need to do it. Using a voice actor would have been completely standard and they would get a credit at the end, which I'm sure they'd appreciate.
I don’t see the big deal. When watching, let’s say, a Ken Burns documentary about the national parks, they’ll have a guy who sounds as much like John Muir as possible reading letters he wrote. They don’t add a single footnote or disclaimer anywhere in the film that it’s not John Muir speaking. You’re just supposed to know it’s someone impersonating what they think he sounded like. If it was AI instead of a voiceover actor, that would be the same thing to me, within the context of a documentary, where you’re trying to convey that the person communicating is the person whose voice you’re supposed to be listening to.
Well anyone in that Ken Burns doc wouldn’t have immediate family or friends listening to it, and not to mention it’d obviously not be an original voice recording.
I don't get why this is controversial. Documentaries constantly have someone reading a written document while imitating the voice of the author, and none of them come out and say "This is not the actual voice of Abraham Lincoln". Why is having a computer imitate someone's voice different from having a person do it?
It's pretty obvious why you will never hear any recordings of Lincoln's voice. In comparison, it most certainly could be Bourdain's voice here so it's not at all obvious that it isn't.
Is the issue, imitation without consent or that viewers are not informed correctly?
I thought that, given his ex-wife had not said okay, and given that Bourdain's past actions suggest he may not have agreed with something like this, using this technique was a poor choice; not necessarily that viewers are shown generated instead of recorded content.
I think with the correct context and/or the correct say-so this would be fine. But:
I didn't really follow Mr. Bourdain at all, but given the clarity from other people who did and who believe he would have hated this, it's not the act of making a voice clone that's unethical; it's doing it outside the wishes, or the perceived wishes, of that person.
In other words, this is unethical because had they gone to ask permission from the estate (or whoever could make that call now that Mr. Bourdain is no longer with us), they would likely have gotten a no.
What makes it particularly egregious is that people interested enough in making this clone of his voice would likely have followed him closely enough to know he wouldn't have agreed to this if he were alive.
I guess you can boil my position down to this: without permission, explicit or implied, it's unethical; and not only that, legally I'd put it under impersonation.
Personally I think this is interesting, ethics be damned. It is obvious the direction we are going with these kind of technologies. We will be able to create entirely new content using the personalities of deceased people out of whole cloth.
Some day we may be reduced to nothing more than a pile of ashes and a USB stick with our personality reincarnated into it. That's far better than simply being a pile of ashes, and may be the closest we get to some kind of immortality, a way to keep our ego wandering the digital world long after we've died and gone to wherever you choose to believe we go.
Combined with advanced AR technology, all it may take is putting on some glasses to see the ghosts of your ancestors wandering the real world and interacting with it. Well, maybe not our ancestors. We will be the ancestors, and our descendants will be the ones watching us, tending to our digital souls.
Very unfortunate. They could easily have done something which I think would have been more touching like have his friends, family, or lovers read the text, instead. I haven't seen the documentary (not sure I will now), but I feel like that would have been more emotive.
We already DO have this same controversy about edited photos and videos, so there is no difference: most of the people who think this is wrong would also agree that creating a fake photo/video would be wrong.
Weren't these types of legalities worked out when they copied Crispin Glover's face for the actor who played George McFly in Back to the Future Part II, i.e. the “right of publicity”?
A deepfake audio + video with a mask of the mayor of eThekwini was used to encourage people to continue looting during the recent riots in KwaZulu-Natal, South Africa.
South African media and public consciousness has not grown to accommodate the "deepfake" idea yet. It's no surprise Google searches for deepfake come up empty.
I've consumed lots of random media from many WhatsApp and Telegram groups. I've lost track of lots of it in the stream of information. That stuff doesn't get indexed on Google. The media narrative is tightly controlled, ministers claimed things were under control on TV whilst us on the ground knew this was false. Zello has also been extremely useful for community security. But trust has been an issue because of the open nature of these platforms.
I've been trying for this whole week to come up with tech solutions to this, but there are also social issues technology alone cannot overcome. E.g.: community members organised in ad-hoc security patrol groups engaged in friendly fire because they did not know they were friendlies. A Telegram bot to query registration plates makes sense... but that leads us back to apartheid-esque access control (no!) and the issue of trusting the entity that controls the db.
To be fair, that doesn’t make it untrue. It just means a source is required - the parent may be a primary source, heh, but that has to be established.
(I agree with you, I just have been corrected in the past with actual on-ground knowledge that hadn’t been reported yet - don’t know if that’s the case here :))
Arguably Sassy Justice is worse: Donald Trump is alive, and I don't think Sassy Justice was an attack, but it wasn't flattering either. Sassy Justice was not just audio, and it was somewhat clearly satire, but it carried no warnings.
It reminds me of when Boko Haram kidnapped the 200 Chibok schoolgirls: they had killed and kidnapped thousands of schoolboys first. You don't have to like Trump or care about boys, but it can come back to bite you.
I just hope synthetic media in docos is pointing out to people that docos aren't the truth-tellers people think they are. They are entertainment, not purveyors of truth.
I actually think getting deep fake stuff to the public is the way to go. Basically anyone can create a deep fake. It is far better to get the public used to the idea of deep fakes so they can develop some immunity to it.
I'm not sure that when you go down that rabbit hole folks will believe much at all...
I don't want to go down the rabbit hole here but everyone hears about 'fake news', they know what it is, they just label things they don't like 'fake news'. I'm not sure that's helping.
We are there already with "fact checks". When someone sees a "fact check" pop up on something they post on Facebook, they just say "See? I knew I was right"
We've had Photoshop for 3 decades and airbrushing for even longer and it's had far-reaching impacts across society... no one is really immune to it because it isn't obvious and it has arguably poisoned realistic ideals around body image.
IMO we need to mandate disclosures around deepfakes. It's impossible on a peer-to-peer level, but commercially it should be clearly disclosed.
We don't get immunity to photoshop in the "I can tell from some of the pixels" sense, but we gain immunity in the sense that we no longer accept every image as representing absolute truth. Plenty of people today still think that video manipulation more sophisticated than an instagram filter takes a Hollywood budget or a lot of expertise.
Once people get more experience seeing and creating deepfakes on their own they'll be less trusting of random videos they see in the future
I think your hypothesis is defeated by how people take screenshots of text (captions, headlines) at face value on Reddit, Twitter, and other social media. You can scroll through all the comments and you won’t find a single person asking for a source, just people reacting to some text.
If people don’t have an immunity to a screenshot of text that claims something, how are they going to have an immunity to Photoshop and deep fakes?
You don't even need a screenshot for that. Any text online, even just a comment will be taken at face value by a certain percentage of the population. The general gullibility of people or their susceptibility to disinformation is always going to be an issue. Even for people like that who just don't question things they see (especially when they want to believe them) I'd bet if you pressed them they'd admit to being aware of photoshop and of just how easy it is to fake a screenshot of text.
Are there any studies on "I can tell from some of the pixels"? Over the years I keep coming across people (myself included) on the internets being correct about 'shopped photos while the masses believed in them. Is this just down to familiarity with software and a healthy bit of skepticism?
I think it's the healthy skepticism that's more important than being able to detect some minute detail in an edit. For a while photos were considered strong proof, not so much today.
There has been research on detecting photo manipulation and there's plenty of things people can watch out for if they're trying to "prove" an image was altered, but a lot of edits I see are so bad/obvious that the people making them aren't really trying to "fool" anyone with them. They just think it makes the photo look better. They'll do things like jack up color saturation to impossible/unnatural levels, or remove every pore from their skin, etc.
> we gain immunity in the sense that we no longer accept every image as representing absolute truth
GPs whole point about body image is that, as a society, we don't have that immunity. Basically no one has a healthy ideal of what a good body image is because we never even get to see them.
The fact that media can so effectively influence our view of an ideal body has little to do with our awareness about photo manipulation. We know full well photos are touched up and fake. In fact we're at the point now where millions of people are so aware of it that they are routinely editing their own photos to match whatever the ideal being pushed at us in the moment is. Maybe that's "instagram face" or giant asses, hell I remember when it was heroin chic. The point is that we all know it's fake. It just doesn't matter because media tells us what to like/want regardless.
I disagree! photo manipulation plays a starring role in how pervasive and insidious these ideas are... I can turn off the TV and avoid movies, but I can't avoid ads, billboards, grocery stores, direct mail, you name it... it's everywhere.
I live alone, and if I go into town there's a good chance I'll see more doctored photos of people than I see actual people (depending on how busy things are, which is usually not very).
>we no longer accept every image as representing absolute truth
That's not true for many people (I'd go as far to say most), and we're so inundated with it that unedited media is the minority. Ad campaigns get PR for being "unedited" and even then they're heavily art directed (casting, lighting, styling, etc) to compensate.
The effects are so widespread that they're subliminal, even if you're conscious of the scope that they occur. Billboards, tv, movies, newspaper, magazines, products on shelves, menus at restaurants, wedding photos, family christmas postcards... it's inescapable.
Even if you're some paragon of mindfulness and truth in image editing and can somehow isolate yourself from its influence, you're still subject to it because of how it impacts the way everyone else behaves and sees the world.
Just last month, Norway made changes to its Advertisement Act, which states:
"The advertiser, and the person designing the advertisement, shall further ensure that the advertisement where a body's shape, size or skin has been changed by retouching or other manipulation, shall be marked."
photo manipulation is pervasive, but I don't take that as evidence that it is actually being believed as truth. I'm not really sure what the subliminal effects are. If you see an ad for a fast food burger on TV it might successfully make you hungry, and make you want to go to that restaurant, but nobody really expects the food they get will look anything like the food in the ad did.
> but nobody really expects the food they get will look anything like the food in the ad did
Have you ever worked at a restaurant? It's not as unusual as you think. A lot of us on HN are in bubbles of savvy people because of our tech-related professions, and most people are NOT savvy. Many people never consciously think about the images they're subjected to.
Wouldn't it be a problem for the restaurant if people were constantly sending food back or being disappointed by the product because it didn't look like the ad? I mean, the difference is striking! (see https://i.imgur.com/e9EaVbu.jpeg). If I genuinely expected the first burger in that image and got served the second one, I'd demand my money back and maybe never set foot in that restaurant again! Wouldn't most people? It seems more likely to me that most people accept that the first burger is a fantasy.
I worked restaurants through college and I've probably had food thrown at me over a dozen times because it didn't look like the menu!
It's not the norm, but it's not completely unusual. I've had many more people complain about the disparity in less severe terms. It's a very weird world out there, and if I've learned anything it's that I'm incredibly lucky to have any amount of self-awareness because a lot of people are running around out there on pure id... unaware of just about anything. If you're at all skeptical about anything you're ahead of the curve.
I don’t see this as any different than getting an actor to play the part. This happens all the time in docos for a more immersive experience. JFK’s voice comes to mind quite a bit as an example. Is it unethical to get an actor to play his voice?
The major difference I see is that the viewer is aware that they are watching an actor. Or at least, they should be aware. It has always been standard practice to note when something being shown is a reenactment of an event, and not the actual one.
Even if they used a voice actor to mimic Bourdain, they should have informed the audience that an actor was reading the script.
He hated fake, phoney people or food.
He hated when his production crew would stovepipe his bits.
He hated a scene in Greece (I believe) where the crew went out and bought squid and threw them into the water to make the shoot better.
I really liked Bourdain. He was one of the few celebrities that didn't seem to change with fame, or money.
I watched him for years, and knew he was unhappy, but never thought suicide unhappy.