As the internet expands, so does the challenge of maintaining a safe digital space. Online platforms face increasing pressure to identify and remove harmful content promptly.
While there has been significant investment in artificial intelligence (AI) to automate this task, the reality is that human content moderators (CMs) still bear the brunt of this responsibility. CMs, often hired by third-party companies on behalf of major social media platforms like Facebook, TikTok, and Instagram, are tasked with reviewing and removing harmful online posts to protect users. Yet, the toll this job takes on them is significant and often overlooked.
The hidden struggle of content moderators
Human moderators face unique challenges in their day-to-day work. Their responsibilities range from analysing content containing offensive language to reviewing disturbing imagery of violence and exploitation. They must make complex, context-dependent decisions that require empathy and understanding, and this comes at a high cost as they are constantly exposed to distressing material.
Studies have shown that jobs involving prolonged exposure to other people’s suffering can lead to conditions like secondary traumatic stress, emotional exhaustion, and psychological distress. Content moderation shares many of these risks, and many CMs experience higher rates of mental health issues such as anxiety, depression, and post-traumatic stress disorder.
The problem is compounded by the high-stress environment of content moderation: moderators must make quick, high-stakes decisions on content that could affect the safety of millions of users globally. CMs uphold policies set by tech companies to remove harmful content, which requires continuous focus and a deep understanding of guidelines. The pressure leads to emotional exhaustion, burnout, and a general sense of apathy, and many moderators report feeling underappreciated and isolated.
A landmark case involving Facebook (now Meta) in 2020 highlighted the reality of this distressing work environment. Facebook agreed to a $52m settlement to compensate moderators who developed mental health conditions because of their work. The case, brought forward by former moderator Selena Scola, highlighted the traumatic nature of content moderation.
Can AI provide a solution to this problem?
Despite the severity of these psychological issues, the industry still relies on human moderators. While AI offers hope as a tool for content moderation, its limitations are evident: algorithms struggle with nuanced content, such as satire or culturally specific references, and can overlook harmful material or wrongly flag benign posts. Human oversight therefore remains necessary, despite the risks to moderators' mental health.
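To make the division of labour concrete, the sketch below shows one common pattern for hybrid moderation: an automated classifier auto-actions only high-confidence cases and escalates everything ambiguous to a human moderator. This is a minimal illustration, not any platform's actual system; the classifier, thresholds, and flagged terms are assumptions chosen purely for demonstration.

```python
# Illustrative sketch of a hypothetical hybrid moderation pipeline:
# the model auto-removes or auto-allows only when it is confident,
# and routes nuanced or uncertain posts to a human content moderator.
# All names, thresholds, and the toy scoring function are assumptions.

from dataclasses import dataclass


@dataclass
class ModerationResult:
    action: str        # "remove", "allow", or "human_review"
    harm_score: float  # estimated probability that the post is harmful


def score_post(text: str) -> float:
    """Placeholder for an ML classifier; returns a harm probability in [0, 1]."""
    flagged_terms = {"violence", "exploitation"}
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, 0.4 * hits)


def moderate(text: str,
             remove_threshold: float = 0.9,
             allow_threshold: float = 0.1) -> ModerationResult:
    score = score_post(text)
    if score >= remove_threshold:
        return ModerationResult("remove", score)       # high confidence: auto-remove
    if score <= allow_threshold:
        return ModerationResult("allow", score)        # high confidence: auto-allow
    return ModerationResult("human_review", score)     # nuanced/uncertain: escalate to a CM


if __name__ == "__main__":
    posts = [
        "Lovely weather today",
        "graphic violence and exploitation",
        "violence in video games debate",
    ]
    for post in posts:
        print(post, "->", moderate(post))
```

In a setup like this, the grey zone between the two thresholds is exactly where human judgement is still required, which is why the workload, and the exposure to distressing material, continues to fall on CMs.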
The responsibility of online platforms
At present, AI does not offer a comprehensive solution, and the burden will remain on human moderators. Online platforms must therefore take greater responsibility for protecting the mental health of their moderation teams.
Offering adequate mental health support, reducing workloads, and providing fair compensation are all important steps.
Greater transparency about the working conditions of content moderation will also help raise industry expectations and standards. As the internet continues to rely on their labour, the tech industry must prioritise the well-being of CMs to ensure the protection of the internet doesn’t come at the cost of those who moderate it.