2024 is a big year for elections – the sheer number happening this year has thrown into relief how artificial intelligence (AI) and misinformation might impact current and future polls.
There is a fear that, as the technology develops, it will be able to produce hyper-realistic deepfakes of people and events, presenting serious misinformation challenges. AI tools can already imitate voices with striking accuracy, and it won't be long before AI-generated videos become truly convincing.
Elections and AI-generated imagery
The dangers that AI-generated imagery poses to women and girls have already been covered.
One of the major critiques featured in this article was that the overwhelming majority of coverage about OpenAI’s text-to-video platform, Sora, was about the geopolitical implications of generative AI. This was the case, at the time of writing, for all the top 100 articles shown by a Google search of “sora openai”.
While this discussion is important to have, this article will take that critique one step further to suggest AI isn’t even the most important thing we should be discussing concerning misinformation. AI-generated misinformation is, after all, simply a symptom of a wider misinformation problem in contemporary politics that should be grappled with.
Old school fakery
In May 2024, Tortoise Media reported that a video of Rachael Maskell, Labour MP for York Central in the UK, telling a crowd that the UK must "keep going" with mass migration had become very popular in right-wing spaces online. The video looked fake: Maskell's words and mouth movements did not appear to match, and the BBC logo seemed to move, suggesting the video could have been generated by AI, which would have made it the first of its kind for a UK election.
However, it turns out that the video and the address to the crowd were real; the footage was from 2015, at a Refugees Welcome Rally in York. This did not stop the video being presented as if it were recent.
Misinformation does not rely on accuracy
That this video was presented as recent despite being almost a decade old demonstrates that misinformation relies not on accuracy in how events are portrayed but on biases and a lack of media literacy. An entire worldview could be shown to be built on falsehoods, and people would still believe it.
This is what makes misinformation effective. It doesn't matter how realistic a piece of media is; if the message behind it is agreeable to certain people, especially at a highly politically charged time, they will believe it.
Placing restrictions on AI's ability to generate political content thus achieves little beyond placating concerns that misunderstand why generative AI is a threat. Tackling AI-generated misinformation instead requires tackling why people are willing to believe certain information even when it turns out to be false.
Combatting misinformation
Education is the key to tackling misinformation. This can be in the form of greater emphasis on literature studies in schools, encouragement of humanities degrees, or national advertising campaigns supported by free and accessible resources. But this would take a long time.
In addition, social media can play a role in providing the right media and political education just as much as it plays a role in spreading misinformation. TikTok, for example, has been written about on Verdict before as a place of media literacy and debate. Such platforms have the advantage that people are already engaged with them, making them a good place to start for both formal organisational and informal individual efforts to tackle misinformation.
As Alexi Mostrous wrote in the aforementioned Tortoise Media article, “AI fakery might grab the headlines, but often it’s the more prosaic types of misinformation that gain the most traction online.” It is important to look past the headlines going forward and use the tools at our disposal to tackle misinformation.