November 17, 2022

The Deepfake Era: A Brief History

Around five years ago, the term deepfake was coined on Reddit.

Since then, most people will have seen a deepfaked video or photo. The technology uses a form of artificial intelligence to replace real audio and video with synthetic audio and video created from other source material.

 The premise of this technology originated in the 1997 paper “Video Rewrite: Driving Visual Speech with Audio” by Bregler, Covell, and Slaney; the project used “existing footage to automatically create a new video of a person mouthing words that she did not speak in the original footage.” A suggested use for the technology was movie dubbing.

Deepfake technology can be used for comedic effect or to make a light-hearted video for friends. It was already being used to prank friends and create online content by the tail end of 2019, when one YouTuber deepfaked his co-host's face onto famous movie scenes, an example of how the technology can produce light-hearted entertainment.

This AI technology, however, has a darker side.

On October 21, BBC News told the story of Kate Isaacs, a campaigner who was tagged in a video on Twitter in which her face had been deepfaked onto the body of a porn actress.

In fact, the majority of deepfakes are not comedic or light-hearted at all. In 2019, research by Deeptrace found that 96% of the deepfake videos it identified were non-consensual pornography.

An even more sinister avenue of the technology arises through its potential impact on the current news and political landscape.

When asked to think of a deepfake, many people will recall the viral 2018 video in which Jordan Peele's voice was dubbed over a deepfake of Barack Obama. It garnered 9.2m YouTube views for BuzzFeedVideo.

The deepfaked video, paired with Peele's audio, demonstrates the danger deepfakes pose in a news landscape where unverified claims already circulate online. Even the likeness of prominent politicians, such as the former US President, can be co-opted by the technology.

According to a survey by iProov, people are most concerned that deepfaking could make it harder to trust what they see online. Deepfakes blur the boundary between what is real and what is fake, making it more difficult for people to verify online content.

The iProov survey also reveals that 80% of global respondents would be more likely to use an online service if it could prevent deepfakes.

This suggests that online service providers would gain users by developing defences against deepfake technology, giving them both ethical and business incentives to protect against deepfaked videos and images.

The future of the deepfaking phenomenon may depend on what online service providers will do next and whether the internet culture around identity protection changes. 

Looking to the future, perhaps more positive uses of the technology may emerge. Forbes reported on the business uses of deepfake technology, stating that “companies can create more efficient methods of distributing information to their customers and employees” if creating video content becomes quicker and easier.

Whether through continued productive use by businesses or a return to the light-hearted content the technology can produce, deepfaking does not seem to be going anywhere.
