The premiere of the second season of BBC drama The Capture has put the spotlight on deepfakes and how the technology can be misused. However, not everyone is familiar with what a deepfake is. They should be.
If you’re an avid scroller on social media, chances are you’ve encountered a deepfake. The latest, and possibly most sinister, instalment of mind-bending computer imagery, deepfakes are created using artificial intelligence (AI) to replace one person’s likeness with another’s.
Whether it’s Donald Trump claiming he’s signed up to Russian YouTube, the late Queen Elizabeth delivering a Christmas message while doing a TikTok dance or Elon Musk as Charlie Sheen – you’ve probably already seen one. Well-made deepfakes can be extremely hard to distinguish from reality, no matter how absurd the premise.
While most deepfakes found on social media are purely made for entertainment and comedy purposes, many experts believe they could pose a serious threat to the public.
“Fabrications and disinformation, in one form or another, have been around for centuries,” Martin Scott, lead analyst at Analysys Mason, tells Verdict. “[Deepfakes] are just a particularly effective modern twist on that.”
As AI technology continues to advance and become more widely accessible, distinguishing what is real from what is fake grows ever harder.
What is a deepfake?
The term “deepfake” stems from “deep learning”, the AI technique used to create them.
Deep learning uses neural networks with three or more layers, whose algorithms teach themselves to solve problems when fed large amounts of data.
The most popular way to create a deepfake uses these deep neural networks with an autoencoder to carry out a face swap. Creators start with a base video, then gather further video clips of the individual they want to insert into it.
Through deep learning, the autoencoder studies those clips to learn what the inserted person looks like from a range of angles. It then maps that person onto the target in the base video by finding shared features.
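The shared-encoder, per-person-decoder idea behind this face swap can be sketched in a few lines of Python. The weights below are random, untrained stand-ins – a real pipeline would train the encoder and both decoders on thousands of frames of each face – but the swap path is the same: encode person A’s frame, then decode it with person B’s decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Flattened 64x64 grayscale "face" frames; a 128-number latent code.
IMG, LATENT = 64 * 64, 128

# One encoder is shared between both people, so it learns pose and
# expression; each person gets their own decoder, which learns their
# appearance. (Random, untrained stand-in weights for illustration.)
W_enc = rng.normal(scale=0.01, size=(IMG, LATENT))
W_dec_a = rng.normal(scale=0.01, size=(LATENT, IMG))
W_dec_b = rng.normal(scale=0.01, size=(LATENT, IMG))

def encode(face):
    # Compress the frame to a latent code (pose, expression, lighting).
    return np.tanh(face @ W_enc)

def decode(code, W_dec):
    # Rebuild a full frame from the latent code.
    return code @ W_dec

face_a = rng.random(IMG)  # a stand-in frame of person A

# Normal training path: person A's frame through A's own decoder.
reconstruction = decode(encode(face_a), W_dec_a)

# The face-swap trick: A's latent code through B's decoder, so B's
# appearance is rendered with A's pose and expression.
swapped = decode(encode(face_a), W_dec_b)

print(reconstruction.shape, swapped.shape)  # both (4096,)
```

In practice the decoders are deep convolutional networks and training takes hours on a GPU, but the swap itself is exactly this re-routing of latent codes.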
Deepfakes came into the public eye when they began being shared on a subreddit called r/deepfakes in 2017. Disgustingly, most of the content was pornographic videos with female celebrities’ faces swapped in.
Sensity, a research company tracking online deepfake videos since December 2018, said between 90% and 95% of them were nonconsensual porn, MIT Technology Review reported in 2021.
Deepfakes go dangerously further than just video
Deepfakes can go much further than face swapping on a video. Deepfake audio is also on the rise and has produced some very convincing fakes.
Using the same deep learning techniques, a person’s voice can be replicated to a very realistic degree.
Back in 2019, it was revealed that thieves had secured a wire transfer of $240,000 by using deepfake audio to replicate a chief executive’s voice.
The fake CEO asked for “immediate assistance to finalise an urgent business deal.”
“The software was able to imitate the voice, and not only the voice: the tonality, the punctuation, the German accent,” a spokesperson for the company said.
Deepfake audio is not always needed to scam people. Sometimes, all con artists need is a great impersonator. Get Out writer-director Jordan Peele proved the point in 2018 when he released a deepfake video that appeared to show former US President Barack Obama speaking, with Peele’s own impersonation supplying the voice.
What is being done to protect us from deepfakes?
While it’s true that deepfakes are getting harder and harder to detect, there are a number of organisations taking steps to combat them.
Meta (then Facebook) hosted the Deepfake Detection Challenge. The social media giant offered prizes of up to $500,000 for the creation of new tools and technologies to detect deepfakes.
Another effort, known as Operation Minerva, uses an algorithm to compare potential deepfakes against videos that have already been given a “digital fingerprint”. This reveals whether a realistic-looking deepfake reuses a video that has already been posted on the internet.
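Operation Minerva’s exact algorithm isn’t public, but fingerprint matching of this kind is commonly illustrated with a perceptual hash such as a difference hash: downsample each frame, record which pixels are brighter than their neighbours, and compare the resulting bit patterns. The frames below are random stand-ins; a small Hamming distance between fingerprints suggests the same underlying footage even after re-encoding.

```python
import numpy as np

def dhash(frame, size=8):
    """Difference hash: a compact perceptual fingerprint of one frame.

    Block-average the frame down to (size x size+1) pixels, then record
    whether each pixel is brighter than its right-hand neighbour,
    giving a 64-bit pattern that survives re-encoding and small edits.
    """
    rows = np.array_split(frame, size, axis=0)
    small = np.array(
        [[blk.mean() for blk in np.array_split(r, size + 1, axis=1)]
         for r in rows]
    )
    return (small[:, 1:] > small[:, :-1]).flatten()

def hamming(fp_a, fp_b):
    """Count differing bits: a low distance means likely the same footage."""
    return int(np.sum(fp_a != fp_b))

rng = np.random.default_rng(1)
original = rng.random((72, 90))            # stand-in for a known frame
reposted = original + rng.normal(scale=0.01, size=original.shape)  # re-encoded copy
unrelated = rng.random((72, 90))           # completely different frame

# The lightly altered copy stays close to the original's fingerprint;
# unrelated footage lands far away.
print(hamming(dhash(original), dhash(reposted)))
print(hamming(dhash(original), dhash(unrelated)))
```

A matching service can then index fingerprints of known videos and flag any upload whose fingerprint falls within a small distance threshold of one already on file.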
Although it’s not easy to detect a manipulated video or audio file, it’s not impossible. As the technology grows more advanced, we need to remain vigilant about everything the internet presents to us.
GlobalData is the parent company of Verdict and its sister publications.