OpenAI, the startup behind ChatGPT, has released new content authentication tools to help distinguish between real and AI-generated content, including watermarks for AI-generated voice content.
The tools are part of a wider effort from OpenAI to combat generative AI’s role in online misinformation.
The startup has also joined the steering committee of the Coalition for Content Provenance and Authenticity (C2PA) to create a metadata standard for tagging AI-generated content.
C2PA metadata will be added to content created by DALL-E 3, ChatGPT and OpenAI's video generator Sora.
In a blog post announcing the new watermarks, OpenAI wrote that it hoped C2PA metadata would help build trust between online users and AI developers.
“As adoption of the standard increases, this [C2PA metadata] can accompany content through its lifecycle of sharing, modification, and reuse,” according to OpenAI.
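In practice, C2PA specifies signed, tamper-evident manifests embedded directly in media files. The sketch below is not the real standard, only an illustrative Python analogy with hypothetical field names (claim_generator, content_sha256, actions), showing the basic idea of provenance metadata that records a content hash and travels with an asset:

```python
import hashlib
import json

# Illustrative sketch only: a drastically simplified stand-in for a C2PA
# manifest. The real standard uses cryptographically signed, binary-encoded
# manifests embedded in the asset; the field names here are hypothetical.

def make_manifest(content: bytes, generator: str) -> dict:
    """Attach provenance metadata to a piece of content."""
    return {
        "claim_generator": generator,  # e.g. the AI tool that produced it
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "actions": ["created"],        # would grow as content is edited/reused
    }

def verify(content: bytes, manifest: dict) -> bool:
    """Check whether the content still matches its recorded hash."""
    return hashlib.sha256(content).hexdigest() == manifest["content_sha256"]

image_bytes = b"...pretend these are image bytes..."
manifest = make_manifest(image_bytes, "DALL-E 3")
print(json.dumps(manifest, indent=2))
print("unmodified:", verify(image_bytes, manifest))       # True
print("tampered:", verify(image_bytes + b"x", manifest))  # False
```

The verification step hints at why OpenAI describes the metadata as accompanying content through its lifecycle: any modification to the asset breaks the recorded hash, so downstream platforms can detect that the content no longer matches its original provenance claim.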
“Over time, we believe this kind of metadata will be something people come to expect, filling a crucial gap in digital content authenticity practices,” the startup’s blog read.
Alongside its financial backer, Microsoft, OpenAI has also launched a $2m fund to educate online users about AI-generated content.
The fund includes support for Older Adults Technology Services, an organisation that provides digital skills training to older people to boost digital equity.
The proliferation of AI-generated content online has sparked debate and concern ahead of the many global elections set to take place in 2024. Research and analysis company GlobalData estimates that around four billion people will vote in the next 12 months.
In its blog, OpenAI concluded that the responsibility for preventing AI-generated misinformation from spreading also falls on content creators and social media platforms.
“While technical solutions like the above give us active tools for our defences, effectively enabling content authenticity in practice will require collective action,” it wrote.
Andrew Newell, chief scientific officer at biometric identification company iProov, said that watermarking AI-generated content, while important, was only part of the solution to tackling deepfakes.
“If deepfakes continue to develop at this pace they will soon erode any faith society once had in audio-visual content, and trust in material from any source, whether genuine or fake, will be destroyed,” he said.
Newell added that more AI developers and researchers needed to adopt and use C2PA metadata for it to become an industry-wide safeguard.
“However, as [C2PA] is rolled out more widely, there is a danger that threat actors use the tool to fine-tune their own deepfakes. That’s why there is a need for constant, rapid evolution of defences to ensure you stay one step ahead of cybercriminals,” he stated.