Since the 2019 general election, artificial intelligence (AI) has advanced at a rapid pace.

Convincing ‘deepfakes’ (AI-generated video or audio that misrepresents someone) have become easier and cheaper to produce, and are becoming almost indistinguishable from reality, especially in the case of generated audio. While deepfakes rely on machine learning, ‘cheapfakes’ are manipulated audio and video that can be made on smartphone apps for little to no money.

‘Deepfakes’ and ‘cheapfakes’ have proliferated substantially: according to the Institute for Government, 70% of MPs worry that advances in AI will increase the spread of misinformation.

Election problems with deepfakes and cheapfakes

The vast quantity of AI-generated content can swamp fact-checkers and distract us from what politicians are actually saying. The problem is compounded by the rising number of people using social media as their primary source of news. AI’s advance has the potential to fundamentally alter our political landscape, which is why the upcoming election can be seen as a ‘guinea pig’ of sorts for future elections. The Centre for Policy Studies has labelled this the UK’s “first deepfake election”.

With GlobalData forecasting the AI market to reach $900bn by 2030, deepfakes will only become more convincing as training data grows more plentiful and machine learning techniques advance. While companies and governments have made significant strides in limiting the spread of AI-generated video and audio, not enough has been done to counter malicious political deepfakes.

Misinformation through deepfakes is spread by many different actors and is often created for light-hearted purposes. Without disclaimers, however, not everyone can tell what is fictitious, especially those with cognitive impairments or those who are not tech-savvy. In many instances, the most that can be said is that content cannot be confirmed as real, because deepfakes are hard to verify either way. General misinformation can also heighten distrust in public institutions and the media. This is especially true for British media: according to a King’s College London survey, the UK public has the second-lowest trust in its media in the world.

Deepfakes and geopolitics

Russia and China have been accused of spreading deepfakes and running bot farms to influence British politics as far back as the Brexit referendum. Professor Joe Burton of Lancaster University points out that a nation like Russia does not need to advocate a specific policy position: merely widening divisions weakens British resolve. The proliferation of deepfakes can also dissuade many from voting, especially if they feel alienated from mainstream politics.

Deepfakes have already affected the domestic and international news cycle. The BBC reported that AI-generated content pushed rumours that a major scandal caused Rishi Sunak to call the election early, as well as the false claim that Sir Keir Starmer failed to prosecute Jimmy Savile.

Furthermore, videos on TikTok have misrepresented both sides’ positions on polarising issues such as national service, Brexit, and transgender rights. A BBC investigation found more than 100 adverts on Facebook impersonating the prime minister, as well as digitally altered audio of Sir Keir Starmer yelling at his colleagues. Just last week, Reform UK had to deny that an altered video of Nigel Farage playing Minecraft was real.

There are also international examples. As 900 million Indians went to the polls, Prime Minister Narendra Modi stated that AI-generated deepfakes were a significant concern after an altered video circulated online. A more drastic example occurred earlier this year in the US, where Democratic organiser Steve Kramer faced a ‘first-of-its-kind’ $6m penalty after using a deepfake of Joe Biden’s voice to tell prospective Democratic Party voters not to vote in the primaries.

What can be done?

How can social media companies regulate this? There is a fine line between freedom of speech and disinformation peddling, especially when it comes to politics. Firms like Meta have been accused of profiting from misinformation and deepfakes, with many arguing that Facebook either lacks an effective vetting process or deliberately lets misinformation through to boost engagement and revenue.

X is a good case study of both approaches. In 2019, it banned political advertising outright, both to push politicians to earn rather than buy their reach and to combat misinformation. After Elon Musk acquired the company, it became less stringent about content moderation. An X spokesperson previously stated that it could not take down the aforementioned altered audio of Sir Keir Starmer because it could not be proven to be fake.

Companies that design AI have taken significant steps to combat election misinformation. Notably, Alphabet’s Gemini AI and OpenAI’s ChatGPT refer users to a Google search when asked election-related questions. Elsewhere, Microsoft is developing a way of digitally watermarking videos using software from the Coalition for Content Provenance and Authenticity (C2PA) and has endorsed banning AI in politics. Meta released a database of 100,000 deepfakes in 2020 to improve the way AI recognises them, and now labels AI-generated posts with a tag saying “Imagined with AI.”
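To illustrate the provenance idea behind C2PA-style watermarking in principle: a publisher cryptographically signs a hash of the media file together with its metadata, so any later alteration of the file (such as splicing in generated audio) breaks verification. The Python sketch below is a minimal conceptual example of that signing scheme only; it is not the actual C2PA specification or Microsoft’s implementation, and all names in it are illustrative.

```python
# Conceptual sketch only: a signed record binding metadata to a media hash,
# in the spirit of C2PA-style provenance. Not the real C2PA manifest format.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def sign_provenance(media: bytes, metadata: dict,
                    key: ed25519.Ed25519PrivateKey) -> dict:
    """Sign a hash of the media together with its provenance metadata."""
    payload = json.dumps(
        {"sha256": hashlib.sha256(media).hexdigest(), **metadata},
        sort_keys=True,
    ).encode()
    return {"payload": payload, "signature": key.sign(payload)}


def verify_provenance(media: bytes, record: dict,
                      pub: ed25519.Ed25519PublicKey) -> bool:
    """Return True only if the signature holds and the media is unaltered."""
    try:
        pub.verify(record["signature"], record["payload"])
    except InvalidSignature:
        return False  # the signed record itself was tampered with
    claimed = json.loads(record["payload"])["sha256"]
    return claimed == hashlib.sha256(media).hexdigest()


# Any edit to the media after signing breaks verification.
key = ed25519.Ed25519PrivateKey.generate()
video = b"...raw video bytes..."  # placeholder for a real file's contents
record = sign_provenance(video, {"publisher": "Example News"}, key)
assert verify_provenance(video, record, key.public_key())
assert not verify_provenance(video + b"x", record, key.public_key())
```

The design point is that detection of fakes is hard but verification of authentic material is tractable: rather than proving a clip is fake, a viewer (or platform) checks whether it carries a valid signature from its claimed source.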

Self-regulation seems to be the course most Western governments are taking to avoid accusations of restricting freedom of speech, despite the worsening situation. This has led many to call for external regulation. The UK passed the Online Safety Act in 2023, which introduced standards for internet companies and established a committee to advise Ofcom on misinformation.

No UK election deepfake – yet

However, the act contained little of significance on either political misinformation or AI-generated misinformation. Across the pond, a series of bipartisan bills passed in the US aimed to limit the use of AI in official election material, yet none attempted to regulate social media. Both the public and private sectors agree that not enough has been done to limit the effect of deepfakes and misinformation. Self-regulation can be volatile and inconsistent, yet heavy-handed government intervention risks infringing freedom of speech. Although no election-altering deepfake has yet emerged in this election, it is important to limit the spread of malicious AI-generated content so that it does not threaten democracy in the future.