In January 2024, US voters in the state of New Hampshire received a call from President Joe Biden, urging them to refrain from voting in the primary election. Only, they didn’t. It was a robocall using deepfake audio to impersonate the President.
The likeness of the AI-generated audio to President Biden’s voice speaks to the growing sophistication of deepfakes, raising alarm bells about the potential misuse of the technology.
In the lead-up to major elections, concerns about the proliferation of deepfake content have surged, prompting investigations into the average viewer’s ability to discern between genuine and artificially generated media.
The difficulty in detecting deepfakes lies in the technology’s increasing refinement. High-end manipulations, which often focus on facial transformations, make it challenging for viewers to discern authenticity.
GlobalData analyst Emma Christy warns, “a significant number of people will be unable to discern deepfake audio from reality, with catastrophic implications for countries holding elections this year.”
Christy cites a 2023 University College London study in which participants were able to identify fake speech only 73% of the time, improving just slightly after receiving training to recognise aspects of deepfake speech. “The samples used in the study were created with relatively old AI algorithms, which suggests humans might be less able to detect deepfake speech created using present and future AI,” says Christy.
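Automated detectors confront the same problem statistically rather than perceptually. As a purely illustrative sketch (not the UCL study’s method or any production detector), the Python below shows a classic baseline pattern: summarise each clip as acoustic features (here, mean MFCCs) and fit a simple classifier. The librosa and scikit-learn libraries are assumed, and the file names and labels are hypothetical placeholders.

```python
# Illustrative sketch only: a toy "real vs fake" speech classifier.
# File names and labels are hypothetical; real deepfake-speech
# detectors use far richer features and models than this baseline.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def mfcc_features(path: str) -> np.ndarray:
    """Load a clip and summarise it as mean MFCC coefficients."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)  # one 20-dim vector per clip

# Hypothetical labelled corpus: 1 = genuine speech, 0 = AI-generated
train_paths = ["real_01.wav", "fake_01.wav"]  # placeholder files
train_labels = [1, 0]

X = np.stack([mfcc_features(p) for p in train_paths])
clf = LogisticRegression(max_iter=1000).fit(X, train_labels)

# Probability that an unseen clip is genuine speech
print(clf.predict_proba([mfcc_features("unknown.wav")])[0][1])
```

Detectors of this kind face the same arms race Christy describes: a model trained on clips from older generators tends to degrade as new synthesis methods appear.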
A recent study, published in iScience, revealed that people struggle to reliably detect manipulated video content. Despite being informed that half of the videos were authentic, participants guessed that 67.4% were genuine.
As the ability to generate deepfakes has become more accessible, concerns about accountability and the use of deepfakes in deceptive campaigns, such as mass voter misinformation efforts, are coming to the fore.
MIT’s DetectFakes Project explores how well ordinary individuals can distinguish authentic videos from those produced by artificial intelligence.
The Kaggle Deepfake Detection Challenge (DFDC) enlisted the collaborative efforts of industry giants like AWS, Facebook, and Microsoft, along with academic institutions, to incentivise the development of innovative deepfake detection technologies, awarding $1m to the competition winners.
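DFDC entries varied widely, but a common pattern was frame-level scoring: sample frames from a video, score each frame with an image classifier, and average the results. The sketch below illustrates only that pattern under stated assumptions (PyTorch, torchvision, and OpenCV); the ResNet here is an untrained placeholder rather than any competition entry, and “clip.mp4” is a hypothetical path.

```python
# Illustrative sketch of the frame-level scoring pattern common to
# many video deepfake detectors: sample frames, score each with an
# image classifier, average. The ResNet below is an UNTRAINED
# placeholder, not a DFDC entry; "clip.mp4" is a hypothetical path.
import cv2
import torch
import torchvision.models as models
import torchvision.transforms as T

model = models.resnet18(weights=None)                 # untrained placeholder
model.fc = torch.nn.Linear(model.fc.in_features, 1)   # binary "fake" head
model.eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
])

def fake_probability(video_path: str, every_n: int = 30) -> float:
    """Average per-frame fake scores across sampled frames."""
    cap = cv2.VideoCapture(video_path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:                        # sample ~1 frame/second
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            x = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                scores.append(torch.sigmoid(model(x)).item())
        idx += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0

print(fake_probability("clip.mp4"))
```

Averaging over many frames is what makes this pattern attractive: a manipulation that slips past the classifier in one frame is likely to be caught in others.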
The challenge posed by deepfakes goes beyond traditional fake news: AI-generated content is more convincing and tends to create false narratives that resonate with individuals’ existing beliefs.
Former Google fraud czar Shuman Ghosemajumder has warned of the societal concern surrounding deepfakes, emphasising their potential to damage individuals and influence public opinion.
Research indicates that people struggle to differentiate between real and deepfake content, with the potential for deepfakes to sow uncertainty and erode trust in genuine media.
A multi-faceted approach
Efforts to detect and prevent deepfakes are underway, with researchers developing software and proposing updates to election campaign fraud rules. However, the continual advancement of AI technology poses a persistent challenge, making it crucial to educate the public on the existence of deepfakes and how to identify them.
As deepfakes become more accessible and convincing, addressing this threat requires a multi-faceted approach, involving technological advancements, regulatory measures, and public awareness initiatives.
With so many major elections taking place, 2024 may serve as a critical testing ground for society’s ability to navigate the challenges posed by AI-generated content.