The emergence of generative artificial intelligence (AI) is aiding bad actors in spreading misinformation and deepfakes.
Generative AI can be used to defame public figures, manipulate public opinion, and infringe intellectual property rights. There is an immediate need to address these challenges, which will require collaboration among regulators, tech companies, and researchers. Failure to regulate generative AI in time could not only cause societal damage but also stifle the technology itself: once the damage is done, consumer confidence in generative AI will be extremely difficult to regain.
Deepfakes and misinformation are on the rise
Deepfakes and misinformation have risen in recent years alongside advances in AI, causing financial and reputational damage. Deepfakes are visual or audio content that has been manipulated or generated using AI with the intention of deceiving the audience; the technique can be used, for example, to alter the footage or the audio of an existing video. In May 2023, an individual in China used AI-powered face-swapping technology to impersonate a friend of the victim and convince them to transfer CNY4.3 million ($622,000).
Another example is the fake images of an explosion near the Pentagon that went viral on May 22, 2023. Several verified Twitter accounts, including one claiming to be associated with Bloomberg, shared a tweet showing images of black smoke next to a Pentagon-like building. The tweet reportedly had a brief impact on the stock markets: the Dow Jones Industrial Average dropped 85 points within four minutes of the news, then rebounded quickly once the images were confirmed to be fake. This is a classic example of the dangers of Twitter's pay-to-verify system combined with the growing use of AI-generated fake content.
Public figures are highly vulnerable to AI-generated fake images. In 2023, fake images of Donald Trump being put behind bars and of Pope Francis wearing a puffy, bright white coat went viral. Geoffrey Hinton, Google's AI veteran, cautioned about a misinformation crisis that could result from the misuse of generative AI. He emphasized that the internet will be flooded with fake images and videos that people will not be able to spot, blurring the line between what is real and what is fake. Similarly, the World Health Organization (WHO) warned of the misuse of AI in healthcare, raising concerns about how the data used to train AI models could produce misleading or inaccurate information.
Implications of AI-generated fake content
Advancements in AI technologies such as deep learning and computer vision are driving the creation of deepfakes and image-manipulation algorithms. The fundamental issue is that neither self-regulation by companies nor coordinated global regulation has kept pace with these advances. AI is still in its infancy, but it will soon be mature enough to cause real damage unless appropriate regulations are formulated.
The implications of AI-generated fake content could be malicious, leading to the spread of misinformation. It could also increase the risk of cyberattacks, as hackers could use AI-generated content to create personalized spam messages or images with dangerous code hidden inside. AI-generated fake content also raises ethical concerns, including plagiarism, bias (across gender, religion, and race, among other attributes), and intellectual property misuse.
A multi-pronged approach is a must for generative AI
In May 2023, Google announced its 'About this image' feature to help users assess the authenticity of images. The feature shows when an image was first indexed by Google, where it first appeared, and where else it appears on the internet. Google also plans to use the tool to label AI-generated images and is collaborating with Midjourney and Shutterstock to that end.
Tech giants must work together with regulators to develop best practices, create industry standards, and implement measures to mitigate the spread of AI-generated fake content. Early collaborations are emerging: the Defense Advanced Research Projects Agency (DARPA), Microsoft, Intel, and IBM, for example, are developing technologies to detect fake content.
Developing robust fake-content detection algorithms and raising public awareness of the potential dangers of AI-generated content are urgent priorities. Regulators and tech companies must encourage the ethical and responsible use of AI.
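To give a sense of what such detection work involves, the sketch below fine-tunes a pretrained image model as a binary real-vs-fake classifier in PyTorch. Everything here is an assumption for illustration: the folder layout (data/real and data/fake), the ResNet-18 backbone, and the hyperparameters. It is a minimal example, not any vendor's actual detection system; production detectors rely on far larger datasets, face-specific preprocessing, and artifact-aware features.

```python
# Illustrative sketch only: fine-tune a pretrained CNN as a binary
# real-vs-fake image classifier. The dataset layout, backbone, and
# hyperparameters are assumptions, not a specific detector's design.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Preprocessing expected by ImageNet-pretrained backbones.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumes images under data/fake/ and data/real/; ImageFolder assigns
# class indices alphabetically, so fake=0 and real=1.
dataset = datasets.ImageFolder("data", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Pretrained ResNet-18 with its classifier head replaced by a single
# logit; sigmoid(logit) is the predicted probability an image is real.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for images, labels in loader:  # a single pass over the data, for brevity
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = criterion(logits, labels.float())
    loss.backward()
    optimizer.step()

# Inference: estimated probability that each image in a batch is real.
model.eval()
with torch.no_grad():
    p_real = torch.sigmoid(model(images).squeeze(1))
```

In practice, classifiers of this kind are only one layer of defense; they tend to degrade as generators improve, which is why provenance tools such as Google's image labeling described above are pursued in parallel.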
China is already leading the race to regulate AI, while discussions are ongoing in Europe and the US. More countries will join the efforts as they realize the potential dangers of AI.