Misinformation is among the main challenges facing the media industry today.
It covers a broad spectrum of cases, from deliberately deceptive content (so-called disinformation) to simple inaccuracies and honest mistakes. Whatever the intent, information sources need to ensure that the content they spread is accurate if they want to remain credible in the eyes of their audience and prevent unnecessary harm to society.
An incident of misinformation gained international attention in May 2023, when a fake image posted on Twitter quickly went viral. The image depicted an explosion near the Pentagon, the headquarters of the US Department of Defense, and took many by surprise. It was so realistic that the Department of Defense and the local fire department had to issue separate statements refuting the alleged explosion. Even the US stock market reacted to the news, dipping briefly before recovering. In the aftermath, experts argued that the image was most likely generated by AI. Naturally, the incident revived concerns about AI and what happens when its generation capabilities are used to spread misinformation.
The dark side of generative AI
The above case of misinformation is neither the first nor the last. The growing number of media sources makes the battle against misinformation more challenging than ever, as fake posts and images become more common and more realistic.
Increasingly accessible and capable generative AI also has a hand in this. Generative AI could further exacerbate the challenge of deepfakes: visual or audio content manipulated or generated with AI to deceive the audience. Unlike earlier forms of special effects, deepfakes are increasingly hard to distinguish from the real thing. This is a critical issue, especially for media and news sources that build their brands and reputations on trust.
Several actions have been taken in recognition of this. For instance, influential technologists such as Elon Musk and Steve Wozniak signed “Pause Giant AI Experiments: An Open Letter”, published by the Future of Life Institute, which argued that AI labs should pause work on systems more powerful than GPT-4 for at least six months on safety grounds. The letter also highlights how intelligent machines can exacerbate the spread of misinformation. Launched in March 2023, it had gathered over 30,000 signatures as of July 19, 2023.
Using AI against misinformation
Despite these warnings, AI is being widely used by content and information providers for several purposes, including creating unique user experiences, automating repetitive tasks, and supporting creative and decision-making processes.
According to GlobalData, AI capabilities can also be employed in the fight against misinformation. AI can help identify fake news through anomaly detection: it can enhance fact-checking, flag deviations in data, and determine whether a photo has been artificially altered. AI algorithms can also differentiate between human-authored and computer-generated articles by scrutinizing suspicious content for word patterns, readability, and other signals.
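To make the idea concrete, the minimal Python sketch below computes a few of the stylometric signals such detectors typically inspect: average sentence length, lexical variety, and a standard readability score. The feature set and heuristics are illustrative assumptions for this article, not GlobalData's method or a production-grade detector.

```python
# Illustrative sketch: extract word-pattern and readability features
# of the kind naive AI-text detectors inspect. The features chosen
# here are assumptions for illustration, not a real detector.
import re

def syllable_estimate(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def features(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(syllable_estimate(w) for w in words)
    return {
        # Long, uniformly sized sentences are one weak signal of generated text.
        "avg_sentence_len": len(words) / max(1, len(sentences)),
        # Low lexical variety (type-token ratio) is another weak signal.
        "type_token_ratio": len({w.lower() for w in words}) / max(1, len(words)),
        # Flesch reading ease, a standard readability score.
        "flesch_reading_ease": 206.835
                               - 1.015 * (len(words) / max(1, len(sentences)))
                               - 84.6 * (syllables / max(1, len(words))),
    }

if __name__ == "__main__":
    sample = ("The image depicted an explosion near the Pentagon. "
              "It was so realistic that officials issued statements.")
    print(features(sample))
```

In practice, features like these would feed a trained classifier rather than fixed thresholds, and the output would be combined with human review and other forensic checks.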
Nevertheless, GlobalData suggests that AI alone is not enough to fight the spread of misinformation. AI is not a surefire way of identifying misinformation or stopping the generation of fake content; it takes a combination of humans and AI to do this. With further advancements on the horizon, AI's content generation capabilities will only grow, making fake content even easier to create. It is becoming clear that AI is here to stay, and media sources will need to find ways to fight the spread of misinformation and prevent the unwanted consequences that come with it.