You might have seen those videos of “Tom Cruise” on TikTok, or maybe you saw someone’s face superimposed onto Superman. Deepfakes are getting better by the day!
Deepfake Software
Deepfakes are a species of visual edit that uses pictures and video, combined with AI, to create something new! The AI uses a pre-existing video and a library of photos to replace one person’s likeness with another. If you have the pictures for it, you could deepfake your face onto Chris Hemsworth’s body, and other such shenanigans. And deepfakes aren’t just for video! They can be used to create convincing still images, too. Where Photoshop relies on a human’s touch to make an edit believable, deepfake tech can create a realistic still mostly by itself, given the right tools.
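For the technically curious: most open-source face-swap tools (the faceswap / DeepFaceLab family) boil down to one shared encoder and a separate decoder per person. The Python sketch below is only meant to show the shape of that idea – the layer sizes, the 64x64 face crops, and the single ‘training’ step are made up for illustration, not pulled from any real tool.

# A rough, minimal sketch of the "shared encoder, two decoders" idea behind most
# open-source face-swap tools. NOT a working deepfake pipeline -- the layer sizes,
# 64x64 face crops, and one-step "training" below are invented for illustration.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Squeezes an aligned face crop down to a small shared representation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),  nn.ReLU(),  # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Rebuilds a face from the shared representation -- one decoder per person."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),     # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),  nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a = Decoder()   # in a real tool, trained only on person A's faces
decoder_b = Decoder()   # ...and this one only on person B's faces

# Training (heavily simplified): each decoder learns to reconstruct its own person
# from the *shared* encoder, so the encoder picks up pose and expression features
# common to both. Real tools repeat this for many steps with an optimizer.
faces_a = torch.rand(8, 3, 64, 64)   # stand-ins for real aligned face crops
faces_b = torch.rand(8, 3, 64, 64)
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a) + \
       nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b)

# The swap trick: encode a frame of person A, but decode it with B's decoder,
# so B's face comes out wearing A's pose and expression.
swapped = decoder_b(encoder(faces_a))
print(swapped.shape)  # torch.Size([8, 3, 64, 64])

The swap is the whole trick: the shared encoder learns pose and expression from both people’s photos, so running person A’s frame through person B’s decoder produces B’s face wearing A’s expression. The more (and more varied) photos each decoder sees, the more convincing that output gets – which is exactly why the quality of the source images matters so much.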
That’s the catch: not all deepfake AI has all the tools, and some deepfakes are noticeably worse than others, for a couple of reasons. First, the tech is still pretty new, so most programs are still ‘learning’ what is and isn’t possible for a human face. The second issue is the quality of the images fed to the deepfake – if the images don’t give the software enough information to accurately recreate angles, it’s going to have to get creative. That’s a bad thing when you’re trying to make a believable video.
Celebrities Vs. The Average Joe
Deepfakes rely on data, so if the software doesn’t have much data to work with, the resulting deepfake looks…uncanny. Even really, really good deepfakes right now, with a ton of data, look a little uncanny. Picture the last movie you saw a dead celebrity in – you probably realized something was wrong even if you didn’t know they were dead. Take Grand Moff Tarkin in Rogue One: Peter Cushing had had his whole head scanned at high quality before he died, and the recreation still looked a little strange on-screen. It was little things, like his neck not moving perfectly with his mouth. Young Carrie Fisher at the very end of Rogue One had a noticeable grain due to the source images, and that same young Carrie Fisher in The Rise of Skywalker looked strangely plastic even in low, indirect light.
The average person doesn’t have enough high-quality video or images from even one angle for deepfake AI to make something believable. It only takes a split second of a slightly misplaced nose or mouth for someone to get creeped out by whatever you’re making and identify it as fake. The uncanny valley response is instinctual, but it’s reliable! It takes serious work to overcome that instinct. If Hollywood can’t manage it, is there anything for the average person to worry about? Well… yes. Because the average person has access to the tech, and it’s always getting better.
Controlling It
How do you control it? Big stars have to deal with their image being stolen all the time. If anyone’s prepared, it’s the celebs, who have to fight magazines and movies alike to be represented the way they want to be. But what about average folks, once it starts to bleed downwards? Minor politicians, or competition for the cheerleading squad? Or explicit images made specifically to harm someone’s reputation, made by an amateur with juuust enough knowledge to produce something that, at first glance, looks believable?
How do you account for that?
Let’s look at the TikTok Tom Cruise account. The creator has gone out of his way to make it clear that Tom Cruise’s likeness there is not real. Even so, the videos are jarringly realistic. He used a Tom Cruise impersonator as the ‘base’ for the deepfake, and the end result barely triggers the uncanny valley at all – he just looks a little stiff. That creator’s videos are still up, because it’s obviously not really Tom Cruise, no matter how realistic it is.
And then there’s an account putting Charli D’Amelio’s face on their own body in an attempt to impersonate her. TikTok is removing these because it’s not obvious that it isn’t Charli, even though the quality is worse. Someone who watches it more than once is going to recognize that it’s not Charli, but it’s still getting pulled, because it isn’t being clear enough about what it is. That account is crossing a line.
There’s also a distinction between the two in intent: ‘Tom Cruise’ is showcasing his technical skill, while the Charli impersonator is trying to be Charli.
Legally, copyright law does have some precedent from the music and art world: if an impersonator is so close in performance to the original that an average person can’t distinguish it from the real thing, then they’re violating copyright. Singers use this when covers get a little too close to the original. See Drake songs, for instance: the only covers you’ll find on YouTube are by female singers or men who sound totally different, because he’s very strict about his copyright. When the audience can’t tell them apart, they’re pulled.
The problem is enforcement. The average person is not going to have the time or resources to hunt down impostors and report them all. Charli is famous on TikTok, but if she weren’t, TikTok moderators likely wouldn’t actively hunt down these impersonator accounts for her. If someone really, really hated an obscure user, they’d be able to overpower that user’s reporting efforts with fake content, and that fake content only has to be believable enough for someone to scroll past it and think, “wow, I can’t believe they’d do that”.
The average person is not equipped to scrutinize every single little bit of media that comes their way; it’s exhausting and unrealistic to expect that of them. That’s how disinformation campaigns work. If the deepfake is believable enough, and the original person isn’t aware of it, that deepfake may as well be fact for everyone who sees it and doesn’t realize it’s fake.
Implications
If you’re online a lot, you might have heard of that new Mountain Dew ad featuring Bob Ross’s likeness. This was… weird to a lot of people, and for good reason. Using a person’s likeness to sell something has been a matter of debate ever since money became mainstream – back in BC times, you’d probably have sold more spices if you claimed the king bought from you. But normally the person is able to call out a false endorsement. Now, with deepfakes, you can make celebrities say anything post-mortem, and nobody but the estate will be able to challenge it.
And even if the estate gives permission, how specific do you have to be about that image? Actors struggle with paparazzi images even today – Daniel Radcliffe famously wore the same shirts and pants for weeks so the paparazzi’s photos of him would be worthless. Imagine having the ability to put Daniel Radcliffe in any pose or outfit you wanted for the front of a magazine. The person wouldn’t make unflattering faces for your pictures before they died? Well. Now they will.
Presumably Bob Ross’s estate allowed the use of his image, but in the same way we don’t take organs from dead bodies without the consent of the deceased, maybe we shouldn’t allow dead loved ones’ images to be sold for advertising purposes without their consent beforehand. Especially now, when it’s so easy to deceive people with this tech!
Is There Good?
And then there’s the other side of the spectrum, where deepfakes can be used to bring people back to their glory days, or colorize black-and-white movies. They can be used to de-age actors, as seen in Captain Marvel, Star Wars, and elsewhere. Samuel L. Jackson was de-aged by roughly 25 years for Captain Marvel, and Mark Hamill appeared as he did forty years ago in another Star Wars series.
Deepfakes, given the tools, can do a better job of recreating someone’s face than human-controlled CGI. They could have been used to remove Henry Cavill’s Superman mustache in Justice League, instead of whatever was done that made his face look so unsettling. He couldn’t shave his ‘stache because he was filming Mission: Impossible – Fallout at the same time, so the only way out was either prosthetic facial hair or CGI-ing over it. They picked the CGI. People noticed. Deepfake tech might have made his mouth’s movement a little less uncanny.
Deepfake tech could be used to disguise facial injuries, like the ones Mark Hamill suffered during the original Star Wars trilogy, or to create alien races without the heavy prosthetics traditionally used or sweatshop CGI-studio labor. It could make dubbed movies less visually jarring by lining up actors’ mouths with the words they’re supposed to be saying.
Deepfake technology is a very double-edged sword. All the good it could do doesn’t outweigh the bad. It’s dangerous technology, and in a world that increasingly uses the internet to share information, disinformation is a powerful pollutant.
Sources:
https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them