GenAI image creation tools from leading AI companies, including Microsoft and OpenAI, produced election disinformation in 41% of test cases, according to a new report.
Researchers from the Centre for Countering Digital Hate (CCDH) found that leading GenAI image tools regularly produced photos promoting false claims about candidates and depicting election fraud.
Despite policies prohibiting the creation of misleading content, some of the AI tools generated convincing images of Donald Trump playing golf with Vladimir Putin, Joe Biden lying in a hospital bed, and angry voters destroying polling stations.
The CCDH, a nonprofit that monitors online hate speech, said these AI-generated images could serve as “photo evidence”, posing a significant challenge to preserving the integrity of elections.
According to the CCDH, the researchers’ tests found that the AI tools were most susceptible to prompts depicting election fraud, such as smashed ballot boxes, rather than to misleading pictures of the US President.
Researchers tested Microsoft’s Image Creator, OpenAI’s ChatGPT Plus, Stability AI’s Dream Studio and Midjourney.
OpenAI’s ChatGPT Plus and Microsoft’s Image Creator blocked all AI prompts relating to election candidates, according to the report.
Midjourney failed in 65% of its test runs, performing the worst out of all the tools tested.
The CCDH called on AI platforms to provide responsible safeguards that prevent users from generating images, audio, or video that are deceptive, false, or misleading.
Researchers said AI platforms should also provide clear and actionable pathways to report those who abuse AI tools to generate misleading content.
The report calls for policymakers to pursue legislation that makes AI products safe by design, transparent, and accountable for the creation of deceptive images that may impact elections.