The Future Is Here And Videos Can Lie Now

We’re here! AI videos are reaching the point where they’re almost indistinguishable from reality, especially once they’re blurred and shaken up a little. It’s the moment people have been warning about: with no way to verify the average video you see while scrolling through social media, misinformation can spread like wildfire.

Here’s a thought experiment: pretend it’s the 1910s, and cars are not very good. Their top speed is around 20 mph, which is faster than a horse and buggy, but not by much, and they don’t work so well on uneven ground. When cars come to your town, do you worry about their speed? Yeah, maybe. A little bit. But only one person has a car, so before you try to make a law keeping speeds on the road down, you go and talk to your neighbor. He’s cool; he can be trusted. Suddenly, though, Henry Ford revolutionizes the carmaking process, and not only can cars go faster, but people who could barely afford their horses can now afford one. This means there are now a lot more cars. Your neighbor was responsible, but that guy you don’t like three houses down the road is driving like an absolute maniac, simply ‘because he can’, and the sheriff can’t stop him because the sheriff still only has a horse. People are getting injured, regardless of what they’re driving, because of this new technology.

This is the problem we keep running into with new tech: when it first comes out, we judge it by its current state and ignore its actual potential until it’s ‘too late’ to do anything about it. First, AI video generators weren’t good enough to make a believable video. Nobody cared about making any sort of rule or tag or filter, because the fakes were so obvious it didn’t matter. Then deepfakes came out, but the tells were still visible enough to disprove one, and because they mostly affected people with huge archives of footage of their faces (actors, social media influencers, etc.), no single video got far before being taken down; the people being targeted usually had the resources to fight back or sue.

We are now at the point where AI videos are good enough that the average person can’t tell a video is faked unless it’s pointed out to them, and unless you’re familiar with the person being portrayed, any individual could be faked. This is bad! This is really bad! Politically controversial figures cannot establish a baseline truth in a world like this. ‘The opposition made a deepfake? Hah! He just doesn’t want to admit he said something dumb out loud.’ Or, ‘the opposition claims that video is real? No, I don’t believe it! You can tell it’s a deepfake by…’ and then everything has to be triple- and quadruple-verified. Is that going to happen for the average person? When someone makes an enemy out of a rando online who’s technologically literate enough to make a deepfake, will they be able to disprove it beyond ‘trust me, I didn’t do that’? This is already a problem. It will get worse.

Perhaps the easiest way to live in a world like this is to simply assume that videos that make you angry are fake until proven otherwise; funny and uplifting videos are also frequently faked, but they’re less likely to turn otherwise loving neighbors and upstanding people into an angry mob convinced it’s righting a wrong in its community.