From celebrities to politicians, the digital world is flush with doctored videos and photos known as deepfakes. These new techniques have given rise to alternative versions of reality and introduced a host of ethical dilemmas as well as security concerns. In a world that’s increasingly reliant on video, the implications of this emerging technology could be far-reaching. From social media to security, here’s everything you need to know about deepfakes and their impact on society.
Not sure what a deepfake is? It’s a video or audio clip that has been altered to show something that never happened. The term “deepfake” is a blend of “deep learning” and “fake,” a nod to the machine-learning techniques used to create these clips. A common example is digitally editing a video to replace one person’s face with someone else’s. This can be done by manipulating the video itself or by combining the audio from one clip with the video from another. Beyond that, the possibilities for what a deepfake can show are limited only by imagination.
Deepfakes are created by training software to recognize the features of someone’s face. The software scans thousands of images of that person’s face to build a model of what those features look like at different angles and in different lighting. The technique behind this is called a “generative adversarial network,” or GAN for short. Basically, two algorithms are pitted against each other: one generates fake images, while the other tries to tell the fakes from the real thing, and each round of competition makes the fakes more convincing. As this technology becomes more accessible, it’s being used to create ever more realistic fake videos. The longer you spend working on a deepfake, the more realistic it becomes. And the more realistic it becomes, the harder it is to spot. That’s why it’s so important to stay alert for deepfakes when we encounter them.
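That adversarial loop can be sketched in a few lines of code. The example below is a deliberately tiny, hypothetical version: instead of face images, the “real” data is just numbers drawn from a bell curve, and each “network” is a single linear unit. The structure is the same, though: the discriminator learns to tell real from fake, and the generator learns to fool it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" data the generator must learn to imitate: draws from N(4, 1.25).
# In an actual deepfake pipeline this would be face images, but the
# adversarial training loop has the same shape.
def real_batch(n):
    return rng.normal(4.0, 1.25, size=(n, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Deliberately tiny one-unit "networks":
#   generator      g(z) = wg * z + bg            (turns noise into a sample)
#   discriminator  d(x) = sigmoid(wd * x + bd)   (probability that x is real)
wg, bg = 0.1, 0.0
wd, bd = 0.1, 0.0
lr = 0.01

for step in range(5000):
    n = 64
    z = rng.normal(size=(n, 1))   # noise fed to the generator
    fake = wg * z + bg
    real = real_batch(n)

    # Discriminator step: binary cross-entropy gradients pushing
    # d(real) toward 1 and d(fake) toward 0.
    d_real = sigmoid(wd * real + bd)
    d_fake = sigmoid(wd * fake + bd)
    wd -= lr * (np.mean((d_real - 1) * real) + np.mean(d_fake * fake))
    bd -= lr * (np.mean(d_real - 1) + np.mean(d_fake))

    # Generator step: gradients of -log d(fake), i.e. "fool the discriminator".
    d_fake = sigmoid(wd * (wg * z + bg) + bd)
    wg -= lr * np.mean((d_fake - 1) * wd * z)
    bg -= lr * np.mean((d_fake - 1) * wd)

# After training, the generator's output distribution should have drifted
# toward the real one (whose mean is 4.0).
gen_mean = float((wg * rng.normal(size=(5000, 1)) + bg).mean())
print(f"generator mean after training: {gen_mean:.2f}")
```

Real deepfake GANs work on millions of pixels with deep convolutional networks rather than a single parameter, which is why they need thousands of photos and long training runs, but the tug-of-war between the two models is exactly this.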
The main issue with deepfakes is that it’s almost impossible to tell whether a video has been forged. If a video of a politician saying something controversial goes viral, it could be difficult to tell if it really happened. One potential solution is to add a digital watermark to the video or audio that would flag it as possibly forged, but it’s not yet clear whether that kind of technology would work. When a fake video is combined with real audio, spotting the fake is even harder, and there’s no obvious solution to that problem. While this technology is still in its early days, it’s already creating problems, and the world will soon need answers to the ethical questions it raises.
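Whether watermarking pans out or not, one simple provenance building block already exists: the cryptographic hash. A publisher could release a digest of the original clip, and anyone could then verify that their copy is bit-for-bit identical. Here is a minimal sketch using Python’s standard hashlib; the demo file and its contents are made up for illustration.

```python
import hashlib
import os
import tempfile

def sha256_of_file(path, chunk_size=8192):
    """Stream a file through SHA-256 so even large videos fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, published_digest):
    """True only if the file matches the digest the publisher released."""
    return sha256_of_file(path) == published_digest

# Demo: "publish" a digest for an original clip, then tamper with the file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"original video bytes")
    path = f.name

published = sha256_of_file(path)
untouched_ok = verify(path, published)   # the pristine copy checks out

with open(path, "ab") as f:
    f.write(b" tampered")                # any edit at all changes the hash

tampered_ok = verify(path, published)
os.remove(path)
print(untouched_ok, tampered_ok)
```

Note the limits of this approach: a hash proves a copy matches a specific original, but it says nothing about a brand-new fabricated clip that was never published with a digest in the first place, which is why watermarking and detection research continue.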
More than just causing confusion about what is real and what is fake, deepfakes could also be used to compromise computer security. A sophisticated attacker could use doctored videos to try to gain access to sensitive information or computer networks, for example by convincing an employee that a fake video is real and tricking them into giving up their login credentials or granting access to sensitive data. While it’s unclear whether this kind of attack is happening today, it’s important to be aware that it could happen if the technology falls into the wrong hands.
Beyond being exploited to spread misinformation and manipulate people, deepfakes may also be used to create new computer-generated content in the future. There may one day be a streaming service that features computer-generated actors, or one that lets you create your own visual content. Imagine a future in which you’re able to create your own video game with your friends, fighting aliens in a virtual space that looks and feels real. Or a future in which you can create a convincing video of yourself giving a presentation at a conference or teaching a class. These types of services could be incredibly useful. But they may come with the same privacy and security risks as the fake-news uses of the technology.
The technologies that make it possible to create realistic deepfakes are still in their infancy. It will be years before we fully understand their implications for society. In the meantime, it’s important to remain vigilant about spotting false videos and understanding the potential dangers of deepfakes.