OpenAI has released a blog post describing the benefits of using GPT-4 in content moderation.
According to the company, GPT-4 can speed up content moderation and adapt instantly to new policies and guidelines, making for a streamlined, consistent labelling system.
OpenAI has invited its customers with API access to create their own content moderation system with its AI.
GPT-4 can read a company’s guidelines and then learn to label offensive or explicit posts that go against such standards.
The company also states that discrepancies between GPT-4’s judgements and a human’s are examined, with GPT-4 then asked to explain the reasoning behind its decision.
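In practice, the workflow OpenAI describes amounts to placing the policy text in the prompt and asking the model to label content against it, with a rationale attached. The sketch below is a minimal illustration of that idea using the openai Python package; the policy text and the moderate function are hypothetical placeholders, and this is not OpenAI’s published implementation.

```python
# A minimal sketch of policy-as-prompt moderation. EXAMPLE_POLICY and
# moderate() are hypothetical; only the openai client calls are real API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EXAMPLE_POLICY = """
Label content as ALLOWED or VIOLATION.
VIOLATION includes: threats of violence, targeted harassment,
and sexually explicit material. Label borderline cases REVIEW.
"""

def moderate(post: str) -> str:
    """Ask GPT-4 to label a post against the policy and explain its reasoning."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": EXAMPLE_POLICY},
            {
                "role": "user",
                "content": (
                    "Label the following post and briefly explain which "
                    f"policy clause applies:\n\n{post}"
                ),
            },
        ],
        temperature=0,  # favour consistent labels over creative output
    )
    return response.choices[0].message.content

print(moderate("Example post text goes here."))
```

Because the policy lives in the prompt rather than in the model’s training, updating the guidelines is a matter of editing the text, which is the “instant adaptation” OpenAI highlights.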
On top of more consistent and streamlined labelling of offensive material, OpenAI believes that incorporating AI into content moderation can help ease the mental stress that content moderators can experience.
The mental burden of content moderation
According to research by Webhelp, content moderators often experience the same levels of stress as first responders and paramedics, which the firm says can lead to long-term problems such as substance abuse and emotional dysregulation.
When asked her opinion on AI’s potential for content moderation, GlobalData analyst Maya Sherman said it is important to understand the “dual role” generative AI plays in both the moderation of content and the production of harmful images or text.
For example, a recent study by the Center for Countering Digital Hate (CCDH) found that Midjourney’s generative AI was easily prompted into producing racist and conspiratorial images. This was despite the AI moderation system Midjourney has in place to regulate appropriate use of the tool.
Furthermore, AI-generated images were behind many conspiracy theories surrounding the cause of the recent wildfires in Hawaii.
Sherman admits that the ability to “automate mass content generation” is helpful, especially given the constant growth in user bases across many social media platforms, but says that testing of such technologies is critical.
AI, in Sherman’s words, is still “premature in its cognitive awareness and judgement” and can easily reflect human biases within its algorithms and software.
Despite this, she believes a combination of AI-driven software and human input can help moderate content efficiently and help mitigate the mental impact of such work.
The moderation of AI itself
The large data sets needed to train large language models (LLMs) such as ChatGPT also require meticulous moderation.
Just this month, Kenyan moderators have opened up about the mental toll of moderating data that trains ChatGPT.
Speaking to The Guardian, moderators revealed that they would read up to 700 text passages a day, often including depictions of graphic sexual violence.
Sama, the outsourcing firm that supplied moderators to OpenAI, has since told the BBC that it regrets taking the contract.
“The recent case of Sama and the exploitation of the Global South for content moderation is alarming,” says Sherman, “as humans need automated methods to cope with such ambiguous graphic content.”
Whilst AI may help speed up content moderation, even OpenAI admits that GPT-4 has its limitations.
It states that language models are still “vulnerable to undesired biases” and that humans must remain in the moderation process.
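In practice, keeping humans in the moderation process typically means a routing layer: automated action is confined to clear-cut labels, while ambiguous cases are escalated to a reviewer along with the model’s rationale. The sketch below uses hypothetical names (ModerationResult, route, human_review_queue) to illustrate one such pattern; it is not a description of OpenAI’s or any platform’s actual pipeline.

```python
# A hedged sketch of human-in-the-loop routing. All names here are
# hypothetical; the point is that the model's label alone never closes
# ambiguous cases.
from dataclasses import dataclass

@dataclass
class ModerationResult:
    label: str      # e.g. "ALLOWED", "VIOLATION", "REVIEW"
    rationale: str  # the model's explanation of its judgement

def route(post: str, result: ModerationResult, human_review_queue: list) -> str:
    """Act automatically only on clear-cut labels; defer the rest to humans."""
    if result.label == "ALLOWED":
        return "published"
    if result.label == "VIOLATION":
        # Platforms may still sample these decisions for human audit,
        # given the "undesired biases" OpenAI itself warns about.
        return "removed"
    # Anything ambiguous goes to a human moderator, with the model's
    # rationale attached so the reviewer can see why it was unsure.
    human_review_queue.append((post, result.rationale))
    return "queued_for_review"
```

A design along these lines also limits, rather than eliminates, moderators’ exposure to harmful material: humans see only the cases the model cannot confidently resolve.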