There are positive uses for deepfake technology, such as creating digital voices for people who have lost theirs or updating film footage instead of reshooting it when actors flub their lines. However, the potential for malicious use is of grave concern, especially as the technology becomes more refined. The quality of deepfakes has improved tremendously since the first products of the technology circulated only a few years ago. Since then, some of the scariest examples of artificial intelligence (AI)-enabled deepfakes have technology leaders, governments, and the media talking about the perils the technology could create for communities.
For most of the general public, the first exposure to deepfakes came in 2017, when an anonymous Reddit user posted videos that appeared to show celebrities such as Scarlett Johansson in compromising sexual situations. But it wasn't real footage: the celebrity's face had been fused onto a porn actor's body using deepfake technology, making a fabricated scene appear real. Celebrities and public figures were initially the ones susceptible to the charade, since the algorithms required ample video footage to create a deepfake, and that footage was readily available for celebrities and politicians.
When researchers at the University of Washington posted a deepfake of President Barack Obama and circulated it on the Internet, it became clear how such technology could be abused. The researchers were able to make the video of President Obama appear to say whatever they wanted. Imagine what could transpire if nefarious actors presented a deepfake of a world leader as a real communication; it could be a threat to world security. With cries of “fake news” commonplace, a deepfake could be created to support any agenda and fool others into believing it is an authentic representation of what someone wants to communicate.
Other high-profile examples of manipulated video include an altered clip of House Speaker Nancy Pelosi, retweeted by President Trump as if it were real, that made it look like she was drunkenly stumbling over her words. In that case, the video's timing was slowed to create the effect, yet many believed it was a true depiction. Two British artists created a deepfake of Facebook CEO Mark Zuckerberg talking to CBS News about the "truth of Facebook and who really owns the future." The video was widely circulated on Instagram and quickly went viral.
Deepfake Technology Rapidly Improving
Deepfake technology is improving faster than many expected. In fact, researchers have created a new software tool that lets users edit the transcript of a video to add, change, or delete the words coming out of someone's mouth. This technology isn't yet available to consumers, but published examples illustrate how easily the tool can alter videos.
Deep Video Portraits, a system developed at Stanford University, uses generative neural networks to manipulate not only facial expressions, like those seen in the President Obama deepfake, but a myriad of other movements, including full 3D head position, head rotation, eye gaze, and blinking. Even though these videos aren't perfect, they are remarkably photorealistic. This could be hugely beneficial for dubbing a film into another language and, as the researchers acknowledge, could be abused as well.
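To make that idea a little more concrete, here is a minimal, hypothetical sketch of the rendering-to-video translation at the heart of such systems: a tiny PyTorch encoder-decoder that maps a coarse rendering of a target's head pose, expression, and gaze to a photorealistic frame. This is not the Deep Video Portraits code; the architecture is drastically simplified, and every name is illustrative.

    # Illustrative sketch only: a tiny conditional encoder-decoder in the spirit
    # of rendering-to-video translation networks (NOT the actual system's code).
    import torch
    import torch.nn as nn

    class RenderToVideoGenerator(nn.Module):
        """Maps a coarse rendered face (pose/expression/gaze) to a photo-real frame."""
        def __init__(self, in_channels=3, out_channels=3, base=64):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(in_channels, base, 4, stride=2, padding=1),    # H -> H/2
                nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(base, base * 2, 4, stride=2, padding=1),       # H/2 -> H/4
                nn.BatchNorm2d(base * 2),
                nn.LeakyReLU(0.2, inplace=True),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1),       # H/4 -> H/2
                nn.BatchNorm2d(base),
                nn.ReLU(inplace=True),
                nn.ConvTranspose2d(base, out_channels, 4, stride=2, padding=1),   # H/2 -> H
                nn.Tanh(),  # output in [-1, 1], as is typical for GAN generators
            )

        def forward(self, rendered_conditioning):
            return self.decoder(self.encoder(rendered_conditioning))

    # A random tensor stands in for a rendered conditioning frame.
    generator = RenderToVideoGenerator()
    fake_frame = generator(torch.randn(1, 3, 256, 256))
    print(fake_frame.shape)  # torch.Size([1, 3, 256, 256])

In the real systems, a network like this is trained adversarially against footage of the target so its outputs become indistinguishable from genuine frames; the sketch above shows only the generator's shape-in, shape-out structure.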
Samsung’s AI lab made the Mona Lisa smile and created “living portraits” of Salvador Dali, Marilyn Monroe, and others, using machine learning to create realistic videos from as little as a single image. The system requires only a few photographs of a real face to create a living portrait, which should worry “ordinary people” who assumed they were immune to deepfakes because there isn’t enough video footage of them to train the algorithms. Samsung’s approach first trains on a large collection of videos of many different people, so at generation time it no longer needs footage specific to the “star” of the deepfake.
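As a hedged illustration of that few-shot idea (not Samsung's actual code; every class and variable name below is hypothetical): a small embedder network can compress a handful of reference photos into a single identity vector, which a separately meta-trained generator would then animate.

    # Hedged sketch of the few-shot "living portrait" idea; names are hypothetical.
    import torch
    import torch.nn as nn

    class FaceEmbedder(nn.Module):
        """Compresses a handful of reference photos into one identity embedding."""
        def __init__(self, embed_dim=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, embed_dim),
            )

        def forward(self, photos):                # photos: (K, 3, H, W)
            return self.net(photos).mean(dim=0)   # average over the K references

    # Only a few photos of the target are needed at inference time; the heavy
    # lifting happened earlier, during meta-training on many people's videos.
    embedder = FaceEmbedder()
    identity = embedder(torch.randn(3, 3, 128, 128))  # 3 reference photos
    print(identity.shape)  # torch.Size([256])

This is why the "not enough footage of me" defense no longer holds: the generic face knowledge comes from the large training corpus, and the target contributes only a few snapshots.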
There are even more disturbing capabilities out there. A programmer launched a free, easy-to-use app called DeepNude that would take an image of a fully clothed woman and digitally remove her clothes to create nonconsensual pornography. Just days after the app’s release, the anonymous programmer shut it down. It’s hard to imagine any use for this app other than misuse.
So, now that we know the technology is out there and getting more realistic and easier to use, what do we need to do to protect ourselves and others from misuse? That's a huge question with no easy answers.
Should social media companies be forced to remove videos that are deepfakes from their networks? Does it matter what the intent of the video is? Is there any way to separate entertainment from maliciousness?
Some researchers argue that it’s better for ethical developers to keep pushing the envelope with this technology so they can warn the public about what’s possible and encourage more critical analysis of video content. Others counter that this work simply makes it easier for bad actors to adapt the techniques for their own misuse.
AI may be behind deepfakes, but it can also be instrumental in helping humans detect them. For example, software company Adobe has developed an AI-enabled tool that can spot manipulated images.
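As a rough sketch of how such detectors are commonly built (a generic illustration, not Adobe's tool; the data pipeline is assumed): a pretrained image backbone is fine-tuned as a binary real-versus-fake classifier on labeled face crops.

    # Illustrative only: a generic deepfake-detection setup, not Adobe's tool.
    import torch
    import torch.nn as nn
    from torchvision import models

    # Swap the ImageNet head for a 2-way (real / fake) head. In practice you
    # would pass weights=models.ResNet18_Weights.DEFAULT for pretrained features.
    detector = models.resnet18(weights=None)
    detector.fc = nn.Linear(detector.fc.in_features, 2)

    # Training-step skeleton; a real loader would yield labeled real/fake crops.
    optimizer = torch.optim.Adam(detector.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    def train_step(images, labels):
        optimizer.zero_grad()
        loss = criterion(detector(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()

    # Smoke test with random tensors standing in for face crops.
    loss = train_step(torch.randn(4, 3, 224, 224), torch.randint(0, 2, (4,)))
    print(f"loss: {loss:.3f}")

The catch, of course, is that detectors and generators are locked in an arms race: each published detection cue becomes a training signal for the next generation of fakes.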
However, we can't rely on software alone to do the job for us. With deepfake technology here and improving every day, it would be prudent for all of us to critically assess the authenticity of the videos we consume and their real intent. That means not relying solely on the quality of a video as an indicator of authenticity, but also assessing the social context in which it appears: who shared it (people and institutions) and what they said about it.