Lucasfilm's subsidiary Industrial Light & Magic is known for leading visual-effects work on actors' appearances, helping make films that could not be realized without such technology. For instance, Martin Scorsese entrusted them to "youthen" the leading actors in "The Irishman" so that Robert De Niro et al. could play their characters at a younger age without wearing red-ball "clown-nose" trackers. "Rogue One: A Star Wars Story" practically reanimated Peter Cushing to continue and expand Grand Moff Tarkin's involvement in the Star Wars saga. These processes are painstaking, and artists sweat over details frame by frame to stave off the "uncanny valley" of artificial human likenesses.
Obviously, the labor-intensive nature of today's CGI techniques drives up production costs. Meanwhile, deepfakes on YouTube provide convincing enough renditions of likenesses without the actual actors, all produced on consumer-level GPUs. The potential cost savings are huge, and the benefits are clearly enticing to film productions.
As Hollywood gravitates towards blockbuster franchises, productions will want to bring back the same ensemble of actors (or at least their likenesses) for as long as possible. While moviegoers may be unsettled by seeing "reanimated" dead actors as in Rogue One, they may still hope to see franchise actors looking consistently youthful or attractive on screen. Deepfakes may increasingly be relied upon to provide that effect.
This is deceptive, though. The deepfake essentially builds on the CGI de-aged product. If they'd started with the deepfake alone, it likely wouldn't have yielded this level of result.
It doesn't have to be close, but it helps a lot, both in the quality of the results and in the convergence speed, if the two faces are similar.
In movies you don't usually have the luxury of choosing the original face you want to replace, but people who make memes (or porn) commonly choose a source video featuring someone who resembles the person they want to put in.
I'm inclined to agree, and I'd be interested to see this process applied to some of the raw footage featured in the award-season campaign. It's also worth noting that in addition to de-aging, they also changed De Niro's eye color.
Uuuh, why? I can't imagine a reason why the de-aged video would've improved the deepfake version. If anything, it should've reduced its quality by adding incorrect data which could confuse the model.
The default model for deepfakes is an autoencoder. The encoder and decoder will converge faster and to simpler solutions if the distributions of original faces and fake faces are closer.
It's an intuitive result even ignoring the specific model. It takes less information to go to and from similar faces than it does between two completely dissimilar ones.
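For the curious, here's a minimal sketch of that shared-encoder setup (my assumptions: PyTorch, 64x64 aligned face crops, toy layer sizes; real tools like DeepFaceLab add warping, masking, and GAN losses on top):

    import torch
    import torch.nn as nn

    # One shared encoder, one decoder per identity. Training each decoder to
    # reconstruct its own face forces a common latent space, which is why
    # similar source/target faces converge faster and to simpler solutions.
    class Encoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
                nn.Flatten(),
                nn.Linear(128 * 16 * 16, 512),
            )
        def forward(self, x):
            return self.net(x)

    class Decoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.fc = nn.Linear(512, 128 * 16 * 16)
            self.net = nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )
        def forward(self, z):
            return self.net(self.fc(z).view(-1, 128, 16, 16))

    encoder, dec_a, dec_b = Encoder(), Decoder(), Decoder()
    opt = torch.optim.Adam(
        [*encoder.parameters(), *dec_a.parameters(), *dec_b.parameters()], lr=1e-4)
    l1 = nn.L1Loss()

    faces_a = torch.rand(8, 3, 64, 64)  # stand-ins for aligned face crops
    faces_b = torch.rand(8, 3, 64, 64)

    # One training step: each identity is reconstructed through the shared encoder.
    opt.zero_grad()
    loss = l1(dec_a(encoder(faces_a)), faces_a) + l1(dec_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()

    # The swap at inference time: encode a frame of face A, decode with B's decoder.
    fake_b = dec_b(encoder(faces_a))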
Well, Robert De Niro's face still has the same proportions and bone structure, so taking the original video as a starting point would probably yield the same results.
That would still be an effective technique for big-budget films if the net result is an improvement on the state of the art (which it does seem to be, I think).
Wait, what? The deepfake is better? To me, not at all: the deepfake seems blurry. It looks like, well, a deepfake. There is something distinct about deepfakes that just stands out; I think it's the soft blur around individual parts. It takes the easy route by blurring and darkening a lot of things. Look at the eyelids in the modeled example, for instance: there is incredible detail there.
Deepfakes definitely have a place in this space, in that they can do things with 1/100th the effort in a scalable way, but they have a lot more catching up to do with traditional modeling than some folks appear to think. Rendering with goodies like SSS, AO, etc. gives magical results which are hard to achieve any other way. And as soon as you get a little bit complicated in what you're trying to create, at least the currently existing neural-network models fall apart and are just not applicable. Take this video for example, which was very manually modeled: https://www.youtube.com/watch?v=BC2dRkm8ATU Deepfakes are a long, long way from taking a stab at things like this.
> Wait, what? The deepfake is better? To me, not at all: the deepfake seems blurry. […]
I agree with your complaints about deepfakes, but imo it did a far better job de-aging De Niro. To me, the release version had a lot of "old man" cues that the deepfake one didn't, such as the jowls and heavy wrinkles.
It is? The deepfake just looks out of focus to me, just something blurry that as a side effect removes some wrinkles but also takes away lighting and everything else.
Not to say the deepfake isn't seriously impressive. But I very much prefer the Netflix version.
That does look amazing. But I'm not sure I should be seeing myriad facsimiles of a younger De Niro...
If De Niro appears as Scorsese wanted him to appear, then the deepfake (which I do find subjectively better) is actually aiming at the wrong target. It seems like a flawed comparison.
My question is, how would the cost structure work? Can I now hire an actor who looks 90% like someone I have in mind, then deepfake them to look 99.9% like them, and save on the actor's cost?
It works when you are trying to do that with actors who are no longer with us, but what about actors who are still alive?
It is so promising that this technology is reaching the masses and letting random YouTubers compete with the best. Hopefully this type of tech further decentralizes content creation from Hollywood to other parts of the world.
Would be good if content publishers did something like digitally signing their content along with embedded metadata, so that if e.g. you see a video circulating, you can see that the BBC attested that it was released by them and that its original air date was such-and-such.
Or if you see a quote claiming to be from Emmanuel Macron or Boris Johnson, you can see it was released with a digital signature from a Guardian journalist & they add whatever date/time/location details they want to validate the information.
If instead I release something (on Twitter or wherever) that I say is a screen-capture of my TV, _I_ add the metadata (originally seen on BBC on 27 Jul, 1:55pm) and sign and then you know that you only trust it as much as you trust _my_ reputation rather than the BBC.
Wouldn't solve the problem entirely but it might create a bit of an audit trail for stuff and encourage people not to trust unvetted material.
Exactly, so if I upload something and say "Look what I just recorded off the BBC!" you _shouldn't_ believe me if creating a deep-fake becomes as easy as recording the real thing.
In that scenario, you'd want people to say "but wait, there's no signature on this, it could be fake" and then only trust the video as much as you trust the source (not the claimed source).
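Mechanically, something like this could work (a rough sketch in Python with the `cryptography` package; the key handling, metadata fields, and "bbc.co.uk" claim are all made up for illustration):

    import hashlib, json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Publisher side (say, the BBC): sign a hash of the video plus metadata.
    publisher_key = Ed25519PrivateKey.generate()  # stand-in for a long-lived key
    video = b"...raw video bytes..."
    claim = json.dumps({
        "sha256": hashlib.sha256(video).hexdigest(),
        "publisher": "bbc.co.uk",
        "aired": "2021-07-27T13:55:00Z",
    }).encode()
    signature = publisher_key.sign(claim)

    # Viewer side: verify against the publisher's well-known public key.
    # No valid signature means you fall back to trusting whoever re-posted it.
    try:
        publisher_key.public_key().verify(signature, claim)
        print("attested by bbc.co.uk, aired", json.loads(claim)["aired"])
    except InvalidSignature:
        print("signature invalid - treat as unvetted")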
The problem is how you make that signature or digital artifact accessible to the general public.
Any visual artifact can be mocked, so we end up with the same problem as clickbait titles, where the conclusion one arrives at from just a title can be disproved, but it doesn't prevent the false information from going viral.
What good is it to say "that video you saw was fake!" after the video has spread around and done the damage already?
It's hard to come up with a solution to this problem just because the solution has to preempt the problem. A cryptographic visual artifact _could_ work, but it's still likely that misinformation via deep-fakes will cause problems for society at large.
Yeah agreed — making security & authenticity understandable to a layperson is always going to be tricky.
Websites like Twitter adopting a "blue tick" for a validated profile on their platform, though, is a model people seem to get. If we had some equivalent of a "blue tick" at a user-agent level, e.g. a way for your browser to take a signature and display it in a standardised, human way to say "this video is signed by bbc.co.uk", it could work. (With a similar model for user-agents elsewhere, e.g. you'd probably need adoption in apps like WhatsApp to get traction.)
The other side of it (like privacy discussions) is how much the average person will care — tabloid journalism often skirts the borders of what it can get away with at the moment, and tabloids nominally have a duty to write only factual information. If Fox News or the Daily Mail release videos and put their own signature to them, then you arguably lend them legitimacy ("it's on the news so it must be true. It's signed by them and all!").
I wonder if there's a startup opportunity there. In the coming years, this problem of deepfakes and lack of trust in media is only going to get worse. Crypto could mitigate this. Maybe a hardware company that sells very high-end cameras to media outlets, cameras that digitally sign all recorded media and add it to a blockchain?
But why store the signature in a blockchain? If you do not trust the certificates in the first place, the storage location won't make any difference. And if you do trust the certificates, the storage location is completely irrelevant, because the certificates alone provide the trust.
For the same reason as certificate transparency logs; you want to avoid trusting something that has a history of certifying false statements. You also need to handle throwaways, so it's definitely not sufficient (and might turn out to not be necessary once a complete solution is found), but it does seem useful.
Well, my thinking was that you would want to store the data, cryptographically signed, on a blockchain for the same reasons (more or less) that NFTs exist on a blockchain. Predominantly, the public ledger of ownership seems like an important aspect of digital content. Is it necessary for trust? Not at all, but it certainly doesn't hurt.
Disclaimer: I am an armchair crypto fan. Not an authority.
Then, when someone videos an atrocity, they need to choose between publishing publicly (risking retribution) or publishing anonymously (if they even know how) and risk being disbelieved.
Well, they could contact a news outlet on the condition of anonymity & the news outlet satisfies themselves as to the validity of the footage.
Similar to how anonymous 'tip-off' stories with protected sources work in general at the moment — the media outlet puts its own reputation on the line on the basis of the source, and we trust (to a certain degree) reputable news outlets to validate and vet their sources correctly. This is true for stuff that's easily forgeable at the moment, e.g. a whistle-blower releasing documents.
They could use ring signatures (like what Monero and others do), where the signature only validates that it came from one of several possible private keys.
This feels like it has been true since forever, just with other media. A picture with a made up quote seems exactly as damaging. Good journalists will continue vetting sources and unscrupulous TV personalities will continue showing whatever fits their narrative without vetting.
It has been true only to a certain level. Photos can be faked and videos can be mislabeled, but as long as it happens in small enough numbers, it is possible to have people point it out and make a fuss about it.
This changes when individual people with no resources at all can make convincing fakes and wield them as a weapon to sow disinformation, to have it then picked up by "major" media; at that point, all information on the net becomes pretty useless.
Again, this was always true, we relied on news agencies and such to be gatekeepers of what is true.
Sometimes a random video pops up and people believe it shows X, and then it propagates, but it's something completely different (e.g. people beating up immigrants on the streets in northern Italy -> traditional Krampus celebrations; Juncker drunk at some event -> Juncker suffers from lumbago; Berlusconi miming a sexual act on some woman -> it was a comedian's skit; Britney Spears sex tape -> it's a random pornstar ...)
People will learn to be doubtful of random internet videos just as they have learned to be doubtful of random internet articles.
Or not, since they haven't yet, but it's not a qualitative change.
The majority of Covid disinformation was spread by 12 people with limited resources. They didn't need deepfakes. Deepfakes add nothing of substance to the liar's toolkit.
It's easy to lie to people so long as you're saying things that validate their shitty emotions. Conversely, it's extremely hard to tell people the truth when it goes against their shitty emotions.
The effect that emotions have on their behaviors can be shitty. Worse, they can be self-reinforcing, such that their emotions cause them to seek out ways to deepen that emotional state, resulting in an increase in shitty behavior.
Those feelings are valid, but the effect they have on other people is not. Dealing with valid emotions in a way that doesn't harm other people can be incredibly difficult, especially when those emotions put harm front and center.
Our emotions are valid, but our behaviors are not. It behooves us to mind our emotional states when they cause problems for other people. Often, that will coincidentally bring about an emotional state we prefer as well, but often with unpleasant transitions.
Emotions as such are good, from sorrow to rage to joy.
Each one is a good response to some situations, as far as I can tell.
Your emotional response can be misplaced, so that you experience an inappropriate emotion in some situations.
As noted elsewhere in the thread, you can also be inspired by your emotions to inappropriate (and just terrible) behavior.
The emotions themselves, though, are not shitty. Recognizing them and understanding where they're coming from can be tremendously helpful in aligning your actions with reality and your own values, and even in discovering what your own values are.
I share this perspective not out of a sense of superiority but in the hope that it helps someone else avoid my mistakes.
I don't see the point in saying emotions are automatically good, but also inappropriate. This just seems like a segregation of two concepts that don't need separating.
E.g. is "I think people of type X are bad, so I feel hatred towards them" a good emotion that's misplaced, or a bad emotion? I don't see the useful distinction.
People often try to eliminate emotions from their lives, whether some specific ones or all of them.
It's called "repression" and it can really mess you up.
I don't mean that emotional responses are automatically good - I mean that emotions, as abstract entities, are inherently good.
I'm not so sure I'd call "hate" an emotion, so I'll try a different example.
If I'm angry at someone because they told me I wrote a buffer overflow (and I actually did), that's an unreasonable and unhealthy response. The fact that I feel anger over it should impel me to introspection and working out why I'm angry over useful technical feedback. From there, I can move on to personal change so that I'm no longer inclined to be angry about good, helpful input.
If I'm angry at someone because they sexually assaulted my wife, that's a healthy anger response. They've mistreated her horribly and my anger on her behalf should push me to protect her and seek justice.
How exactly I act on that anger matters, though - physically intervening so they cannot continue to hurt my wife then calling the police would be good.
> It has been true only to a certain level. […] as long as it happens in small enough numbers, it is possible to have people point it out and make a fuss about it.
I think the opposite is true. This isn’t a historical perspective, you’re using logic to speculate. Historically speaking, there have been fakes that reached huge numbers of people, and they were more damaging then than they are today because they were more believable; the public had not yet conceived that photos could be faked, and it was not possible to see evidence of fakery. Today, everyone knows photos and videos can be faked.
I don’t know of any deep fake videos yet that have tricked a large number of people or been used for political purposes. Maybe it has happened, I don’t know, do you know? But there have been lots of influential faked photos. Just Google a little to find hundreds of historical examples of famous and misleading doctored photos. (Lots of overlap in these lists, because some of the photos are famous).
Since the inception of photography, all photographs have been lies [1,2]. The only remedy is critical thinking and awareness and skepticism on the part of the recipients, which is being outpaced by the technology.
This has been true since forever though. (It's such a weird point.) Very believable photoshops have been possible since forever, but you generally only believe images that are verified by a trusted source. Even ridiculously fakable things like "someone telling you a thing is true" (without having photographic evidence of it) has somehow not been completely eroded as a communication channel by deceptive agents because of reputation and trust holding it all up.
> Now you will see videos of politicians saying something and even then you cannot be sure whether this is actual video or a fake
Back to text and the good ol' credibility of the messenger. Digital media commodified journalists, and now the need for credibility will let them step out of anonymity again.
Does anyone actually determine what to believe this way? Like if you read a quote from a politician in a large newspaper, you don’t believe it’s real, but if you see a cell phone video of the politician saying something at a rally, you do believe it’s real? Personally my confidence in the veracity would be the opposite. There’s nothing special about video that makes it fundamentally harder than text to distort, edit, or even outright fabricate.
To be fair, this tech has been around for years and I've yet to see it used successfully in social-media misinformation. The stuff I see on my in-laws' Facebook is usually some clunky meme photo with shocking text.
A video needs to actually be watched, that takes more effort, and then it would be widely debunked as fake. The "fake news" memes are usually at least partially true which helps convince people that the misinformation is legit.
I say it every time this comes up: People don't care if something looks legit. They care if it supports their views. Nothing will get worse just because the fakes get better. It might even help when it's common knowledge that everything can be faked by a 14 year old on their PC.
But people do care if it looks legit. Something that is legit supports their views better, even if it just looks legit and is actually counterfeit.
But totally agreed it needs to be common knowledge that everything digital can potentially be bogus. This stuff should be taught in schools from an early age honestly.
I think we'll eventually see hardware that cryptographically signs content with a timestamp as soon as it is produced, but then that could be fooled by someone creating a deepfake, projecting it on a high-resolution screen, and photographing that with another camera. Or we won't believe things unless they are captured by multiple cameras at different angles, and maybe future GANs will be able to cover that too. We are seeing the beginning of an arms race!
As long as everyone wants their information for free, this will be just another layer of icing on the cake of misinformation.
Good, clean, and reliable information is expensive and needs a fair bit of work. I can see why most people have forgotten that, but it might come back to them, and then this will be way less of a problem.
"Now you will see videos of politicians saying something and even then you cannot be sure whether this is actual video or a fake."
This is actually already the case with Biden and Trump video clips even without being deepfaked. Often they are presented out of context to the point of completely reversing reality. It's helpful to assume any clip is fake by default, especially if it's a viral one that makes one side look bad.
By the time deepfakes are common, it'll be best practice to assume fake by default.
It doesn't even need an actual quote taken out of context; just a headline (or a thousand of them) will already have an effect, because people scan and can't be bothered to read the contents, until it becomes a background idea stuck in someone's head. People also forget where they read something, forget details, and simplify things over time.
I agree - this is going to make finding the truth so much harder once it’s weaponized as it surely will be. How can representative democracy flourish when you can no longer tell who you want to represent you?
Not to mention the politicians who are recorded doing something genuinely shady and will wave it away as fake news - this is happening already even without deep fakes.
Maybe it could mean journalism will be professionalized again, when trust isn’t as simple as taking a video on your phone or writing a blog post… if we’re lucky.
People will be less likely to believe leaks or supposed hot mic recordings, but the majority of what politicians and other public figures say happens in public view which makes it difficult to fake.
You already don't really know if a video you see on Facebook has been carefully re-cut to change the meaning or tone of what the speaker was saying, so I think the fact that we can more easily wholesale create videos doesn't really change much. If you want to know if something is real the best option is still to cross reference multiple sources and if possible multiple recordings.
This has been a possibility for decades but only for those with large budgets. Now that it is more widely accessible more people actually know it can be done.
> This has been a possibility for decades but only for those with large budgets.
Yep. You don't need CGI if you can search a large population for someone who looks like insert-public-figure-here and make up the difference with makeup.
I remember watching "The Running Man" when I was a kid and thinking that the scene where Arnold loses the fight to the Jesse Ventura guy (which in the movie was a "deepfake") was so unrealistic... man, we are already there. Nothing can be believed anymore.
Fifteen years ago it was not feasible for a random person with an axe to grind to publish a convincing video of a specific person doing something they did not actually do.
It's about to become pretty low-effort for a random person with an axe to grind to do that.
It's not that you can no longer believe anything published online - it's that video evidence without provenance was relatively reasonable to trust for a few years, and it's about to stop being so.
The problem is actually the opposite. People won't believe anything because they assume that what they are seeing is a deepfake. It's happening already: if you see videos online of Trump / Biden / someone notable, there's always someone claiming it's a deepfake if they don't like what they see.
Hollywood hasn't had centralization in content creation for a long time. To the contrary, this past decade has been all about streaming-service studios and indie content creators, while Hollywood continues remaking the same handful of scripts and plots over and over, and creating sequels and modernizations of old films.
As Ricky Gervais said in his infamous Golden Globes speech: "This show should just be me coming out going, 'Well done, Netflix. You win everything. Well done.'"
Give it another decade or so, and I can see Lucasfilm/Disney making full feature films starring only deepfake 'clones' of the original trilogy characters in their younger/O.T. forms.
The implications for the entertainment industry are massive.
When I was working in indie game development, I wondered if you could use deepfakes as a voice actor. Basically, get a famous/good voice with infinite voice lines, without having to pay for studio time. Obviously, you would need them to sign off on using their voice for commercial purposes.
Yeah, the acting part is a valid concern. A mod for The Witcher 3 does this to give the main character voiced dialogue[1], but it doesn't really sound.... right. I mean, it is voiced and some lines feel authentic, but some lines also just feel odd.
A markup language for voice that tells the generator how to inflect everything. It's not on the horizon yet, but anything that our voice can do, a computer will do someday.
It's a lot easier to write "cries like a baby" or "screams in terror" than it is to actually do it on command, over and over again, for take after take.
And one can even imagine a program with emotional slider bars that lets a person listen to how a line sounds with different levels of inflection and then automatically inserts the appropriate markup for the settings the user selects.
That person can be replaceable. Or it can be a team. And you don't need to worry about the AI tiring or damaging its vocal cords after trying out different intonations all day. And eventually there will be good enough automation to generate the intonations too - either entirely or with minimal input from a voice director.
Yup, exactly. Anything you would tell a voice actor to do, you have in the markup. Obviously, the voice actor can still produce higher quality, probably for a long time to come.
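A rudimentary version of this markup already exists: SSML, the W3C Speech Synthesis Markup Language that most TTS engines accept. It covers delivery mechanics like rate, pitch, pauses, and emphasis; the higher-level emotional direction discussed above ("cries like a baby", "screams in terror") would still need richer, for now hypothetical, tags. A small illustration:

    <speak>
      <!-- standard SSML: pacing, pitch, and pauses are expressible today -->
      <prosody rate="fast" pitch="high">Get out! Now!</prosody>
      <break time="300ms"/>
      <emphasis level="strong">Please, don't do this.</emphasis>
      <!-- something like <emotion name="terror"> would be a hypothetical extension -->
    </speak>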
I met someone a few years back who apparently worked in the field of ‘digital persona management’ which is basically an agent for actors’ likeness after they die. It sounded like families and estates were very interested in the concept as long dead actors could potentially become movie stars again in theory.
That sounds like a fairly big ethical dilemma that Disney will happily ignore if making a puppet show out of people's corpses earns them a few extra bucks
I suspect it will also lead to essentially "real" fantasy characters that totally replace human actors. We already have many comic/drawn characters that people associate and identify with in a similar way to how they associate and identify with human actors; there is no reason why you couldn't be presented with "actors" that don't actually exist in person. I cannot recall what it was called, but the industry has already produced a fully CGI movie that tried to push this very thing by essentially making a realistic manga-style movie.
See the Harrison Ford vid linked at the end of the original article. Billy Dee Williams also gets inserted. There’s no doubt that this technique will defeat CGI Youngface.
The worst thing a lot of these deepfake folks do with movie clips is hire impersonators to try to make it more realistic. The problem is that a) the impersonator is usually off a bit in timing or character and b) the soundstage is nothing like the rest of the movie, it just sounds like cuts to a podcast. The result just doesn't work.
While even the new ones are not perfect, they are way better than what's in the movies/shows. Good that they decided to hire the artist instead of just beating him with lawyers.
Lucasfilm has always been good with things like this. Star Wars games pre-Disney were generally easy to mod, and fan-fiction writers didn't have to be too careful about getting sued.
As long as the creator wasn't making money from it.
It's one of the reasons why the franchise survived for so many years without new movies.
To the people complaining that what you see is no longer believable: there is a way out: signatures. It is time to start pressuring public people to digitally sign whatever they say.
> To the people complaining that what you see is no longer believable
I don't understand the complaint. It's a movie: what you saw never was believable.
It's trickery by design. Everything you see has been spliced together from multiple takes, purposefully framed, lit, and colorized. It's all fake, but in a way so culturally ingrained that people don't even notice the deceit anymore. You think this is continuous action you are watching?
But it's the same. I think the fact that people are so scared of deepfakes shows they are not critical enough of what they are already shown. Images lie all the time.
Eh, maybe. People watching cinema "know" it's not real, even though propaganda is a thing I guess. Deepfakes won't alter anything with regard to that though, whether it's a deep-faked actor, a CGI monstrosity, or just a look-alike actor in makeup.
Although you are right about video 'evidence': editing, cuts, and carefully muted dialog can alter things to the point of being the opposite of what was being filmed - an unprovoked attack can become self-defense or vice versa, etc. Again, deep-fakery is just another tool in that unsavory toolbox, not anything paradigm-shifting.
During the 2004 election, doctored photos of John Kerry at an anti-Vietnam War protest circulated widely. They were a simple cut-and-paste job. The world of "people will make fraudulent media to sell a narrative" has already been here for decades.
Plenty of videos are candid or shot and released by a 3rd party.
This could work for press releases and such, but not for videos with headlines like "CEO CAUGHT KICKING A BABY IN THE FACE" or "UNDERCOVER INVESTIGATION: PRESIDENT ADMITS ALIENS ARE REAL".
I think it would have to work through the camera industry to be effective, so the sensor actually includes signatures with the image data. Otherwise all of the 'hot mic' moments are going to be left unaddressed.
Image formats could be updated with an 'original' layer, so that if a news site wanted to crop or edit the content, the original would still be available for comparison.
Given all of the hardware design going into high-speed hashing on ASICs, it shouldn't be that hard to find a component to do the work.
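As a rough sketch of that pipeline (assumptions: a hypothetical per-device Ed25519 key held in secure hardware, and an invented container layout that keeps the original layer alongside the edit):

    import hashlib, json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    device_key = Ed25519PrivateKey.generate()  # stand-in for a key burned in at manufacture
    raw_frame = b"...bytes straight off the sensor..."

    # At capture time, the sensor signs a hash of the raw data plus a timestamp.
    attestation = json.dumps({
        "sha256": hashlib.sha256(raw_frame).hexdigest(),
        "captured": "2021-07-27T13:55:00Z",
    }).encode()

    container = {
        "original": raw_frame,  # the 'original' layer, kept for comparison
        "edited": b"...cropped/color-graded version...",
        "attestation": attestation,
        "signature": device_key.sign(attestation),
    }

    # Anyone can check that the embedded original is what the sensor attested,
    # no matter how the published 'edited' layer was cropped or retouched.
    assert hashlib.sha256(container["original"]).hexdigest() == \
           json.loads(container["attestation"])["sha256"]
    device_key.public_key().verify(container["signature"], container["attestation"])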
Hopefully this gains traction, but it can't be with a "DocuSign"-type signature that has no legal validity. We should champion Qualified Electronic Signatures.
What about signing normal photos for the purpose of stopping misuse: the approach of "if a photo is not digitally signed by each person in the photo for this specific use, it is assumed that the use is not fair"?
I'm not really into the tech behind deepfakes, but doesn't the whole credit go to the tool? Or does one need to adjust the tool somehow to produce decent fakes?
"Point this device at the thing you are looking at and press a button." is sufficient creative effort to generate copyright for photographs, I am not going to be hard on digital artists doing significantly more work.
Generate copyright, sure, but generate value that justifies hiring? You can’t take the camera away from the photographer and have it autonomously recreate the photographer’s taste and tuning. OP is asking whether that’s also the case with the tool shown here (the answer to which I don’t know, but I suspect it does require a fair bit of artistic tuning).
For hiring it obviously just depends on how many people can do the job and what compensation terms they will expect, as well as how much compensation employers will offer.
Well, the person currently working at Lucasfilm got paid to create a deepfake; now someone made a better version, so why not pay them for doing a better job?
A lot of software consists of ready built models, but without the right parameters and constraints you'll just end up with garbage. I don't see a reason why this wouldn't be the case for deep learning models?
The fact that they hired him suggests to me that it's not possible, at this point in time anyway.
Deep fake tools are like ... well, any other digital content creation tool. The artist needs to do most of the parametrization, even though the algorithms come boxed in.
It's not just about the tool, it's about how to set up and use the tool, and I'm sure there's a lot of tweaking, too. It's cheaper to hire this person to set it up and run it for them than to try to get someone up to speed that's already on-staff, I'd bet.
If it were so easy, he wouldn't be the only one getting all those views. His results are far better than anyone else has been able to get, so that says there is a lot more to it than just running the tool.
Adjusting the tool for sure, and moreover, adjusting your data collection strategy to the scene, and applying manual fixes to the data or the output. Definitely an art as much as it is a science, much like Machine Learning in general. I'm not sure if the tool will even improve much with time, as I understand, most of the work is in the data collection, and quantity is definitely not better than quality.
No, I mean afterwards. They saw his vid, and must have thought, let’s try and fix our previous work using a similar method… and discovered even that isn’t so easy.
"Shamook is the one who “fixed” Luke Skywalker’s cameo in The Mandalorian to the tune of 1.9 million videos..."
I understand this is a mistake, we all make them, but where are the editors at The Verge? If a reader can find this with a cursory look-over, shouldn't they find it too? I couldn't have handed this in as a high-school essay, so I'd imagine it wouldn't get past an editor at a large magazine. Maybe it's just me, but from what I see personally there are loads of errors, small and large, in the news these days. It's weird.
From what I have heard, there is less and less money per article in the market. They hire college temps to write (copy-paste from other websites) and no one checks the articles. They have to make sure that whatever has an article on the internet, they have a copy of as well.
I imagine you read plenty of "look-overs" without noticing them. I also imagine that the editors at (say) The Verge do find and fix plenty of mistakes that you wouldn't notice anyway. In this case there's a combination where they didn't notice and you did.
Proofreading is difficult because your mind subconsciously fixes the text you are processing. Editors need to focus hard, but as we all know, it's tough to maintain focus while doing monotonous tasks.
Proofreading is definitely hard, I've made tons of mistakes just on this site. The reason I bring it up is because it is more than a technicality and in the first bit of the article. I would also imagine that editors miss things all the time, but I really think that if one read this it would have been caught.
It's not a big problem, the meaning is still there, but it seems like a trend to me with online journalism at least, and I was wondering if others felt similarly or if I'm just being unfair.
I see. Some professional editor would have to chime in and tell us. :)
I as an amateur have proofread texts for my friends many times and I often missed very visible mistakes. But it's not my job of course, maybe specialists have some methods beyond reading carefully.
Speed is king in the content/clicks game. It's easy to go back and edit afterwards. Hitting publish as fast as possible is the only way. Some "articles" are published as just headlines and fleshed out afterwards. Really interesting to watch on a Bloomberg terminal for example.
> Maybe it's just me, but from what I see personally there are loads of errors, small and large, in the news these days. It's weird.
From what little I know about journalism (very little), editors at traditional media had a very opinionated stance about language and punctuation. It is to the degree that they're not merely finding errors like these, they're also suggesting rewrites for clarity, etc.
I, too, notice many simple errors like these that make me think that editor must not be as valuable a role as it was in the past. As the cost of communication dropped toward zero, an editor role becomes a more significant cost, maybe. Would the cumulative effect of errors like this one be enough to impact the readership of The Verge?
I definitely think the position has to have dropped in value, we're publishing more articles per day than before and most often online, in the land of instant corrections - there's no printing press to worry about. Though I would certainly appreciate if someone read them - there are plenty of independents, bloggers, etc that would catch this stuff in their own writing.
There's a lot of really low-effort articles written nowadays - maybe not so much for The Verge, but definitely for other platforms. They offer payment per word, and they can hire editors that also get paid per word of the article. But it's a race to the bottom, where people push to churn out as many articles as they can per day just to optimize income (or worse, to try and make ends meet).
I don't understand. What's the original vs deepfake comparison about? I know nothing of Star Wars (sorry), is there a third video it's based on or something?
There was a stand-in, and the TV show computer-generated the character to replace the stand-in. Then a YouTuber took that footage and made it look even more realistic.
One thing to note is that the 'original' (CGI actor) was already a bit awkward because things like the lip sync were off compared to a real actor.
That said, the facial animations for Leia in another video he made (they did a digital version of Leia for Rogue One, he deepfaked on top of that) actually improved with the deepfake version.
A key difference is that, at least in the case of the movie "originals", those were digital 3D heads, complete CGI. The YouTuber's deepfakes take old footage and have an AI overlay it onto the face.
Well, why are good actors so sought-after? What makes someone an actor whose likeness people seek and what makes a B-tier actor?
Part of it is inertia and random celebrity status, having an attractive or interesting face etc., but part of it is also the raw knowledge of when to apply certain microexpressions, how to gesture etc. i.e. how to do the acting itself. To be a convincing, charismatic etc. actor it's not enough to wear a digital mask of a celebrity, the underlying actor still needs to act well. That may not be so important for certain types of shallow movies, but it certainly is for deeper drama films etc.
It's similar to today's text generation where you may be able to generate sports game reports, user's manuals or travel brochures etc. but not really those where you need high level decisions, like applying the appropriate expressions to a real-world event, taking into account all the context, like writing a poem about your feelings reflecting on some recent real-world event.
I'm not saying humans have a magical power that can't be implemented in silicon.
What I'm saying is that deepfakes as they are today are not sufficient to replace actors. You'd need a higher level puppeteering AI that would take the whole storyline and script into account to come up with the right ways to express the appropriate emotions at that moment in the film and could take the director's instructions regarding his vision of how the drama should unfold etc.
One example showing that the actor doesn't just provide a face but, well, the acting, is Back to the Future. Originally they had wanted to cast Michael J. Fox, but he wasn't available. So they picked a different actor. And that actor didn't get that the movie was supposed to be fun. They shot several weeks with this actor, who was turning the movie into something very serious, being terrified by being transplanted into the past and finding it tragic to come back to a present where everything is different.
There's a good documentary about this [1] that talks about replacing that actor, and when Fox comes on set and delivers the first line filmed ("You put a time machine... in a DeLorean?!"), it's hilarious, a night-and-day difference.
[1] The Movies That Made Us, season 2, episode 1: www.netflix.com/us/title/80990849
I think all the stuff you describe about charisma is absolutely true in the creative mode of generating value, but there is also an extractive mode of generating value for which I absolutely think deepfakes could be very effective. Once an actor establishes an audience (generally through the creative mode you describe), there is still an opportunity to extract as much value as possible from the remaining good will of the fan base. This already happens with famous actors producing cheap and unpopular movies that seem to only exist to put that actor’s name and face on the poster.
There is a company doing this for video-game voices. The voice actors who provide training data get royalties. I hope that becomes a standard for deepfakes and ML in general, as opposed to how Copilot (and the rest of the industry) generally just takes whatever it can get its hands on as free training data.
I doubt Hollywood would want this, but I'd love to be entertained by choosing some base story, then be able to pick the lead actors and perhaps setting and mood. Deep fakes get a ways toward this. This is basically what remakes are.
Is it just me who thinks the title is sensationalised? I can barely see any difference between the two, and the one made by the YouTuber builds on top of the original one.
It's not just you. I can't tell the difference either.
Maybe someone just has to explain to us what to look for. It could be a curse to know why some people think the deepfake is better. I once talked to a graphics artist about why they thought some effect looked bad, because I didn't see it. They explained it in detail. Now I can't unsee what they were talking about and can easily spot that mistake.
I don't know. Maybe it's better not knowing and enjoying the results, or knowing and thinking each time you see it: "Hey, they made that mistake."
I think the key difference is how the eyes are "reanimated" in the deepfakes. Rendering realistic eyes (with that trademark "spark of life") has always been a challenge for CGI that emulates real life, but these new clips capture the spark in the actors' eyes much better. Luke Skywalker's eyes in the original clip look like dead plastic, the kind you'd find in real-time gameplay (not to mention his nose also looks like the product of bad rhinoplasty). The deepfake does this much better, in my opinion.
Elsewhere in this thread, someone posted a similar comparison clip for The Irishman, and the difference is even more pronounced there. De Niro's eyes actually look the age they're supposed to be in that scene. The original scene had two key problems, in my opinion: one, they decided not to touch the eyes and focused on smoothing the skin (I assume this is because of the technical limitations of doing the de-aging in painstaking CGI); two, and this is a little more of a mystery, they thought they could get away with using De Niro's current 80-year-old voice on a character who is supposed to be 35 (ages are ballpark numbers). The raspy voice of an old man is just not something you expect out of a supposedly much younger and healthier man. They should have just gotten a voice actor who could do a convincing impression of De Niro in the '70s and dubbed him in.
> They should have just gotten a voice actor who could do a convincing impression of De Niro in the '70s and dubbed him in.
The director's focus was on the most genuine De Niro expression, not on the most impressive young De Niro impression.
Scorsese doesn't make tech demos, he makes stories with characters, and he respects his actors a great deal.
What could've been done here is to use another AI technology, a deepfake for voice, basically: train it on young De Niro's voice, then reproduce De Niro's own lines in his younger voice, using the original lines as input (not just text). No voice actor could match a performance as closely as AI can these days.
This was the big winner for me. The deepfake had "Tarkin" and "Leia" on screen. The originals had 3D CGI (very good CGI, but CGI) with dead eyes.
To further my anecdote, I totally missed Leia's sequence the first watch-through because I was watching the content and zoned out, forgetting to evaluate her face for deepfake pixels, because it looked real enough for me to suspend disbelief.
I have never been able to look at Leia's face in the original Rogue One scene without forcing myself to say "they did their best, they'll redo it someday for the 8K release, until then grin and bear the dead eyes".
Building "on top of the original one" is not an asset in the deepfake process, it's a hindrance, because then the deepfake inherits the unnatural 3D facial animation, which is most of the reason why traditional VFX 3D facial replacement works so poorly.
The deep fake one looks very weird to me. The mouth is sometimes not closed and there is minimal movement on the upper lip on words that would need more.
The deepfakes look much better than the original, especially Leia, whose CGI recreation in Rogue One looked odd from the beginning. Tarkin is OK, as is Luke, especially their eyes, which now seem to be looking at whomever they're talking to, thanks to better reflections. Luke's mouth, however, is still unrealistic when matching the speech; for example at 0:32, when Luke says "He wants your permission", the lips don't even touch to create the "P" sound.
I've seen some deepfakes that are very convincing, especially if you show them to someone who doesn't know about them. One example would be this one, in which Sean Connery's face was replaced by Burt Reynolds' in James Bond:
https://www.youtube.com/watch?v=foqeQM-7PSg
Truly amazing what can be done with consumer grade stuff nowadays.
ILM isn't dumb - this is a talent hire for sure. Amazing work from this guy, and dedication to the niche (and they likely have some ideas kicking around in Marvel/Lucasfilm writing rooms that are about bringing alllll the olds back to life).
Could you describe your understanding of what that term means to you? I always hear it referenced, but it hasn't really clicked for me what people mean by it (reading the articles barely helps).
I just find it interesting, notably the stages part. Authenticity of information is a problem in today's society, but really it's false trust in information. How many people get their "news" from Facebook? What lies beyond trust, authenticity, and information itself is an interesting thing to think about. Is there a post-trust society that isn't disorderly and chaotic?
Not really. It's still an art form. The output of a tool is only as good as its operator. Would you hire the developers of Photoshop to draw your paintings?
It depends: how much did the software do, how much did the artist, and is it an established software field or a new one?
If it is the early days of a new software field, I would favor the developers. So when photo manipulation was a new feature, I would have hired the developers.
I know the narrative is to be scared of this tech (maybe even to push towards legislating it). But personally, I just find these things amazingly awesome and super cool.
Scared if it's used to impersonate influential people and spread misinformation, but cool when used in media. It's a difficult one.
Mind you, impersonating influential people in a convincing fashion is / has been a thing for a while now. I'm thinking of Forrest Gump hanging out with the president and the like.
If Photoshop could be likened to handing humanity a loaded revolver, I feel like deepfakes are like handing it an AK-47. I'm hopeful, but I'm also cautiously watching how the world embraces this tech.
This reminds me of a Bojack Horseman plot line where they scanned every actor on set in case they died, so they could recreate them virtually for the movie.
(It's inspired by a book that I haven't read, so I'm not sure if the same idea we're discussing is also in the book... But it's definitely in the movie.)
Before clicking the link, I thought this was going to be a completely different science-fiction story, about (U.S.) congressmen who leave the business of legislative voting to AI copies (brain scans) of themselves while the originals travel full-time to raise funds and votes. After a secret congressional hearing is held on a matter of national security, the scanned copies begin voting in ways counter to or that don't make sense to the original versions, but the copies are not allowed to reveal what was in the content of that hearing that changed their minds.
I don't remember where I read that plot synopsis, so I might have some details wrong. (If it's a book I haven't read it, or it could have been someone's description of a story idea.)
Some years ago they didn't even need to scan them, they just used old footage to create 'virtual reality' / 'hologram' concerts for Elvis Presley, Michael Jackson and Tupac.
Lucasfilm should have stopped making Star Wars after Return of the Jedi. Everything since has been abysmal. That's just a fact no amount of deepfaking will change.