Last night I watched a YouTube video from Philip DeFranco, and the first story he covered was about a tool being used to digitally transplant celebrities' faces into porn videos. Motherboard (NSFW), The Verge, and other tech outlets are reporting more on this "AI-assisted" face swapping, if you'd like to read those accounts. However, I want to take this post in a different direction. The bigger threat, in my view, is that such tools might be used against politicians, children, and loved ones.
The Technology

Image from this Scroll.in article.
The above face swap is possible with a desktop application, FakeApp, which uses machine learning algorithms to render such videos. Much like Photoshop can be used to alter photos, these algorithms can modify videos, yielding increasingly lifelike substitutions of reality. And that's part of the problem.
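For the curious, here's a minimal sketch of the shared-encoder/dual-decoder autoencoder idea generally reported to underlie this kind of face swapping. To be clear, this is an illustration of the general technique, not FakeApp's actual code; the image size, layer widths, and names are my own assumptions chosen for brevity:

```python
# Sketch of the shared-encoder / dual-decoder face-swap idea (assumed
# architecture, not FakeApp's real code). Requires PyTorch.
import torch
import torch.nn as nn

IMG = 64 * 64 * 3  # flattened 64x64 RGB face crop (assumed size)

def make_decoder():
    return nn.Sequential(
        nn.Linear(256, 1024), nn.ReLU(),
        nn.Linear(1024, IMG), nn.Sigmoid(),
    )

# One encoder shared by both identities; one decoder per identity.
shared_encoder = nn.Sequential(
    nn.Linear(IMG, 1024), nn.ReLU(),
    nn.Linear(1024, 256),
)
decoder_a = make_decoder()  # learns to reconstruct person A's face
decoder_b = make_decoder()  # learns to reconstruct person B's face

opt = torch.optim.Adam(
    list(shared_encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.MSELoss()

def train_step(faces_a, faces_b):
    # Each decoder trains only on its own person, but both share the
    # encoder, so the encoder learns a person-independent representation
    # of pose and expression.
    opt.zero_grad()
    loss = loss_fn(decoder_a(shared_encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(shared_encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()
    return loss.item()

# The "swap": encode a frame of person A, decode with B's decoder,
# producing B's face wearing A's pose and expression.
with torch.no_grad():
    frame_a = torch.rand(1, IMG)  # stand-in for a real face crop
    swapped = decoder_b(shared_encoder(frame_a))
```

The unsettling part is how little is special here: because both people pass through the same encoder, decoding one person's expression with the other's decoder is all it takes to put words in someone's mouth, frame by frame.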
The Problem
With reality already in question in a world of "alternative facts," such technologies will only add fuel to the fire. How will we distinguish reality when any "permutation of truth" can be rendered by a computer?
Another concern is that these tools are becoming increasingly accessible, meaning anyone can be a producer and thus anyone can be a target. Consider the following cases:
- Politicians could have their reputations destroyed or claim plausible deniability for any recorded action
- Ex-lovers could share revenge porn without needing to have filmed anything
- School bullies could use this as another tool against their victims
- And generally, the truth could be twisted and manipulated in more imaginative ways
On top of that, advances in machine learning will only make fake videos more convincing. And since these tools can also manufacture non-malicious (meme) productions, like inserting Nicolas Cage into every movie, there will always be an audience for these technologies regardless of whether they're used with corrupt intent or not.
All of these concerns bring two things to mind. First, in recent conversations Chris Gilliard has brought up the fact that we don't "own" our faces, in the sense that companies actively subject them to facial recognition and tracking without (and sometimes with) our consent. These policies on facial recognition, and the normalization of such ideas and technologies, are part of the groundwork that led to these face-swapping algorithms. Second, honestly, I'm unnerved by this use of people's faces, so I want to consider what we can do to combat it.
What Can We Do?
Stay Informed And Critical – Knowing about these technologies and the fake videos they generate is vital. Learning to spot suspicious content and developing the digital and media literacy skills to navigate this world is where we should start. (That's why my professional development focus this semester centers on fake news, information literacy, etc.)
Explore Further – Videos generated using these specific technologies started surfacing in December. The more we know about the software and algorithms that produce them, the greater chance we have of identifying false footage. For example, do FakeApp videos carry digital fingerprints or metadata elements that can be identified? Are there techniques we could use to trace the source of these videos? Just as Google Image Search can be used to trace Photoshopped pictures, are there analogous solutions for tracing fake videos? These are just a few of the questions that need to be considered.
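On the metadata question, here's a rough sketch of one place an investigation might start: dumping a video's container metadata to look for unusual encoder tags or stripped fields. It assumes ffprobe (part of FFmpeg) is installed, and "suspect.mp4" is a hypothetical file name; keep in mind metadata is easily forged or removed, so it's a hint at best, never proof.

```python
# Sketch: inspect a video's container metadata with ffprobe (FFmpeg).
import json
import subprocess

def probe_metadata(path):
    # ffprobe can emit format- and stream-level metadata as JSON.
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

info = probe_metadata("suspect.mp4")  # hypothetical file
# The "tags" dict often includes the encoding tool, which may not match
# what the claimed source (a phone camera, a broadcast) would produce.
print(info["format"].get("tags", {}))
for stream in info["streams"]:
    print(stream.get("codec_name"), stream.get("tags", {}))
```

Robust detection will almost certainly need more than metadata, such as frame-level forensics or perceptual hashing against known originals, but tooling like this at least makes the questions concrete.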
Engage Our Children And Our Students (And Others) – Most importantly, we need to discuss the implications and ethics of using these technologies with our children and students. We need people to reflect when making decisions about the media they produce. This means we need to engage students in thinking critically and empathizing with others so they know it's not okay to (use these tools to) harm or mislead others. Not to mention, being aware of these dangers will prepare people to react appropriately should they encounter fake videos.
Overall, there’s a lot of work to be done around this topic, but I’m hopeful that with awareness, engagement, and positive influences, we can make the digital world a safer place.
Special thanks to Chris Gilliard & Lauren Horn-Griffin for giving me feedback on this post.
The featured image is provided CC0 by Magne Træland via Unsplash.