“Things don’t necessarily have to be true, as long as they are believed.”
Those are the words of Alexander Nix, former CEO of Cambridge Analytica. In 2016, the UK-based data analytics company, now synonymous with disinformation, illegally harvested data from 87 million Facebook profiles whilst working closely with Donald Trump’s campaign team.
In February 2023, an undercover investigation exposed Cambridge Analytica’s cooperation with Israeli hackers, “Team Jorge”, during the 2015 Nigerian election. Utilising powerful software programs to profile and micro-target voters with personalised political ads, “Team Jorge” has overseen campaigns in 33 presidential elections worldwide – 27 of which were successful.
Disinformation has existed for centuries, but artificial intelligence (AI) has already intensified its impact and consequences. Amid a deluge of fake news and alternative facts, AI systems are predicted to amplify the democracy-disrupting trends of the previous decade.
Weaponising AI for disinformation
“Team Jorge” may have been unmasked and Cambridge Analytica disbanded, but these organisations merely form the tip of a global iceberg of data privacy violations. According to a University of Oxford study, social media was used to manipulate political opinion and spread misleading propaganda in at least 81 countries in 2020. Methods include political chatbots, micro-targeting, content-shaping algorithms, cloned human voices and facial recognition databases.
At the simpler end of automation technology, bots use machine learning (ML) methods to generate realistic profile pictures for fake social media accounts. Bot-operated accounts have shared disinformation en masse, from incorrect voting dates and polling locations to messages exploiting marginalised voters’ doubts about the efficacy of political processes.
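To illustrate the mechanism behind those ML-generated profile pictures, here is a minimal sketch of a DCGAN-style generator in PyTorch that maps random noise to an image. It is a toy, untrained model with invented dimensions, shown only to demonstrate the noise-to-image pipeline; real bot operations rely on large pretrained models such as StyleGAN.

```python
# Toy sketch of an ML "face generator": a DCGAN-style network that
# upsamples a random noise vector into an RGB image. Untrained and
# miniature; real operations use large pretrained models (e.g. StyleGAN).
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    def __init__(self, noise_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(noise_dim, 128, 4, 1, 0), nn.ReLU(),  # 1x1 -> 4x4
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),         # 4x4 -> 8x8
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),          # 8x8 -> 16x16
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),          # 16x16 -> 32x32
            nn.ConvTranspose2d(16, 3, 4, 2, 1), nn.Tanh(),           # 32x32 -> 64x64
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

generator = ToyGenerator()
noise = torch.randn(1, 100, 1, 1)   # one random latent vector
fake_image = generator(noise)       # tensor of shape (1, 3, 64, 64)
print(fake_image.shape)
```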
AI will enable targeting of the small percentage of the electorate that remains undecided on election day with “the exact message that will help them reach their final decisions,” according to Darrell West, senior fellow at the Brookings Institution’s Center for Technology Innovation. Such messages are likely to stoke discontent on divisive topics such as abortion, immigration or transgender issues.
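Conceptually, this kind of micro-targeting is straightforward segmentation logic: infer a persuadable voter’s top issue, then serve the matching message. The Python sketch below is a deliberately simplified, hypothetical illustration; every field, segment and message is invented, and it does not represent any named firm’s software.

```python
# Hypothetical micro-targeting sketch: match a tailored political ad
# to a voter profile. All fields, segments and messages are invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class VoterProfile:
    region: str
    top_issue: str   # inferred from harvested browsing/social data
    decided: bool    # a model's guess at whether the vote is settled

MESSAGES = {
    "immigration": "Candidate X will secure the border.",
    "economy": "Candidate X will cut your cost of living.",
    "healthcare": "Candidate X will protect your local hospital.",
}

def pick_ad(profile: VoterProfile) -> Optional[str]:
    # Spend ad budget only on voters judged still persuadable.
    if profile.decided:
        return None
    return MESSAGES.get(profile.top_issue, "Candidate X is on your side.")

voter = VoterProfile(region="swing-district", top_issue="economy", decided=False)
print(pick_ad(voter))  # -> Candidate X will cut your cost of living.
```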
While bots are able to produce “deepfakes” of political figures, generative AI platforms have the capacity to create entire campaign videos. On 25 April, the Republican National Committee (RNC) responded to President Biden’s re-election announcement with an AI-generated video depicting a dystopian future for the USA should the incumbent Democrat retain office in 2024.
The US presidential vote is far from the only crucial election scheduled for 2024. Throughout the year, more than one billion voters will head to the polls in India, the UK, the European Union, Russia, South Africa, Mexico, Indonesia, South Korea and Ukraine in a rare convergence of election cycles.
Many of these elections are set to be highly polarised across political, socioeconomic and ethnoreligious lines – and recent elections in Thailand, Nigeria and Turkey have demonstrated the dangerous co-dependency of disinformation and polarisation.
Ahead of the string of crucial elections taking place in 2024, here is how various governments are regulating AI.
US presidential election: 5 November, 2024
On 16 May, OpenAI CEO Sam Altman called on the US Congress to regulate AI, warning of its potential to cause “significant harm to the world” and “manipulate” the US presidential election. Senator Dick Durbin called it “historic” that a private sector company was coming to the Senate Judiciary Committee asking for regulation. Altman’s stance has led some industry leaders to ask why OpenAI released its AI software to the public before thoroughly evaluating its safety.
Whether Biden’s Republican counterpart is Ron DeSantis, Nikki Haley or the returning Donald Trump, the 2024 US presidential election will be contentious, personal and polarised. Disinformation has been spouted from the highest level, with false claims made about the integrity of the 2020 election results. Adding AI to the mix will only intensify this ahead of November 2024.
On 2 May, Democrat Congresswoman Yvette Clarke introduced a bill requiring full disclosure of AI-generated content in political adverts. While this proposed bill is largely a politicised response to the RNC’s anti-Biden video, it holds the potential to mandate the clear identification of automated, potentially misleading campaign tactics. Republican Sean Cooksey’s belief that AI ads can be regulated by existing laws shows that cross-party action may be delayed by debates over regulation versus innovation.
Followed closely by China, the US is a global leader in AI development. Many believe that OpenAI, Google’s DeepMind and other leading US companies must set the example on responsible regulation.
European Union election: 6-9 June, 2024
Because the EU is a trading bloc rather than a country, its elections have not been tinged with the same levels of polarisation and disinformation. But with more than 400 million eligible voters, the European Parliament elections are significant as the largest transnational vote in the world. Current president Ursula von der Leyen has not confirmed whether she will seek re-election, but is predicted to retain the support of key member states if she does.
Along with China and Brazil, the EU has led the global charge in AI rule-making. On 11 May, the EU’s Internal Market Committee and Civil Liberties Committee drafted an Artificial Intelligence Act that would ban biometric identification systems in public places and predictive policing systems.
MEPs also flagged AI systems that scrape biometric data from social media or CCTV footage to build facial recognition databases as a major privacy and human rights violation. The “high-risk” list will also include AI systems designed to influence voters in political campaigns.
As seen with the EU’s General Data Protection Regulation (GDPR) in 2018, the EU AI Act could become a significant challenge for US Big Tech companies. EU privacy regulators are known to stay true to their word, as evidenced by the record $1.3bn fine issued to Meta on 22 May 2023 for data privacy breaches.
UK election: December 2024/January 2025
Unlike the EU, the UK government initially said that it would not enact any new legislation to control AI, nor create a new regulatory body. This forms part of Conservative Prime Minister Rishi Sunak’s promise of a “pro-innovation” approach to AI, although the PM has since made a more cautious statement about putting “guard rails in place”.
Unlike in the USA, political ads are banned from British TV and radio. Instead, parties are given airtime via debates and broadcasts such as BBC Newsnight. Despite the Advertising Standards Authority’s (ASA) calls for political advertising to be regulated, there is no regulatory body for political ads in the UK, and such ads remain highly prevalent on social media and websites.
Disinformation in the build-up to the UK’s 2019 general election involved deceptive Twitter accounts, doctored videos and unverified websites. Recent political campaigning has ranged from the Tories’ levelling-up ads, which the ASA ruled in breach after £2.15m of taxpayer money was spent on billboards, posters and local newspaper ads, to Labour’s ads attacking Sunak in April 2023.
Many industry leaders are concerned that current UK legislation against disinformation is unprepared for the wide-reaching, opinion-shifting impact AI is likely to have on the electorate. If passed, the controversial Online Safety Bill would compel communications platforms to restrict “legal but harmful” content.
The Bill could curb certain sources of disinformation, although critics argue that it would erode citizens’ privacy and freedom of speech by weakening end-to-end encryption on messaging applications like WhatsApp.
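For context on what weakening would mean: end-to-end encryption ensures that only the two endpoints hold the keys needed to read a message, so the platform relaying it cannot. The sketch below uses the PyNaCl library as an illustrative stand-in; WhatsApp itself implements the Signal protocol, not this code.

```python
# Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
# Illustrative only: messengers like WhatsApp use the Signal protocol
# with key ratcheting, not this simplified one-shot exchange.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()   # secret keys never leave the devices
bob_key = PrivateKey.generate()

# Alice encrypts for Bob using her secret key and his public key.
sender_box = Box(alice_key, bob_key.public_key)
ciphertext = sender_box.encrypt(b"polling station opens at 7am")

# Only Bob, holding his secret key, can decrypt.
receiver_box = Box(bob_key, alice_key.public_key)
print(receiver_box.decrypt(ciphertext))

# The platform relays only `ciphertext`; without an endpoint's secret key
# (or a mandated backdoor), it cannot read the message.
```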
India election: April/May, 2024
India’s information war has escalated on an unprecedented scale. Political parties’ cyberarmies bombard voters with fake news and divisive propaganda across TV and social media, from the anti-Muslim “love jihad” conspiracy to the false WhatsApp messages about child abductors that led to the lynchings of more than a dozen people in 2018.
Current Prime Minister Narendra Modi leads the nationalist Bharatiya Janata Party (BJP). Much of the BJP’s campaigning is built on Hindu-Muslim polarisation, aiming to win favour among Hindus, who account for 80% of India’s electorate. The BJP have also attempted to smear the left-leaning opposition party, the Indian National Congress (INC), as pro-Muslim through disinformation campaigns.
After China, India represents the world’s second-largest internet market. Technological development is a strategic area of growth, which is largely why the Indian government is not considering any regulation of AI, according to IT minister Ashwini Vaishnaw. Majoritarian sentiment and polarising disinformation are predicted to become increasingly prevalent in the lead-up to India’s 2024 elections.
The current state of democratised political information is delicate. AI developments are already becoming a pretext for censorship, whilst adding to the mounds of digital disinformation clouding public judgement.
Ahead of the crucial convergence of elections in 2024, AI legislation passed in the coming months will set a precedent that either keeps this technology in check or leaves it to disrupt democratic processes on a global scale.