Big Tech companies are releasing open-source terrorism-tackling software, allowing smaller companies to moderate content more effectively. These companies are best placed to build content moderation tools and tackle terrorism, as they have the rich datasets, expertise, and investment capacity the task requires.
Content moderation is a slippery and difficult challenge. Online extremists and terrorists hide in echo chambers on the internet, radicalizing vulnerable people. Even worse, recommendation algorithms used by social media sites can push extremist content to the most vulnerable people.
Dealing with online extremism and content moderation has historically been a losing battle. However, new regulations, collaborations, and open-source software promise real progress in tackling terrorism.
Big Tech companies are presenting a united front
In 2017, Big Tech companies Meta (formerly Facebook), Microsoft, Twitter, and YouTube (a subsidiary of Google) formed the Global Internet Forum to Counter Terrorism (GIFCT). The group created a joint database of terrorist content, which can be used as training data for content moderation systems. Both Google and Meta have used this database to create open-source moderation software.
Only this past week, Google released open-source software to help human moderators make decisions on flagged content. Jigsaw, Google’s research group dedicated to threat research, partnered with the UN-backed group Tech Against Terrorism and used the GIFCT database to create the software. Meta has also provided anti-terrorism content moderation software: in December 2022, it released an open-source tool, built using the GIFCT database, that highlights images and videos for urgent review by a human content moderator. Both tools can be used by any business, helping to make online spaces safer.
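At their core, shared-database tools of this kind generally work by matching uploads against known signatures of terrorist content. The Python sketch below is a minimal, hypothetical illustration of that general approach, not code from Google’s or Meta’s actual tools: it assumes content is represented as 256-bit perceptual hashes (as in hash-sharing databases like GIFCT’s) and flags an upload for human review when its hash is close to a known one. The function names, hash values, and distance threshold are all illustrative assumptions.

```python
# Illustrative sketch only: hash-based matching against a shared database.
# Known terrorist images/videos are stored as perceptual hashes; a new upload
# is escalated to a human moderator when its hash is near any known hash.
# All names, values, and the threshold are hypothetical, not from any real tool.

def hamming_distance(hash_a: str, hash_b: str) -> int:
    """Count differing bits between two equal-length hex-encoded hashes."""
    return bin(int(hash_a, 16) ^ int(hash_b, 16)).count("1")

def flag_for_review(candidate_hash: str, known_hashes: set, max_distance: int = 31) -> bool:
    """Return True if the candidate is close to any hash in the shared database."""
    return any(hamming_distance(candidate_hash, known) <= max_distance
               for known in known_hashes)

# Hypothetical usage: a platform checks an upload's hash against the shared list.
shared_database = {"a3f1" * 16, "09bc" * 16}   # stand-in for a GIFCT-style hash list
upload_hash = "a3f0" * 16                      # hash computed from the new upload
if flag_for_review(upload_hash, shared_database):
    print("Escalate to a human moderator for urgent review")
```

In practice, such systems tend to use robust perceptual hashing (for example, Meta’s open-sourced PDQ algorithm for images) so that minor edits to a file still produce a match, but the matching logic follows the same pattern.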
Regulation is driving investment in ethical tech
The EU and UK are clamping down on online extremism with the EU Digital Services Act (DSA) and the UK Online Safety Bill. This rise in regulation is prompting Big Tech to invest in content moderation, even when it comes at a cost. Jigsaw, for example, operates at a financial loss. However, Jigsaw chief executive Yasmin Green told the Financial Times that “there is an understanding that there is a long-term business return […] Google needs a healthier internet.”
The DSA, approved in July 2022 and expected to apply in full from 2024, sets out new obligations for content moderation, transparency, and algorithmic accountability. It targets “Very Large Online Platforms” and “Very Large Online Search Engines”, requiring them to disclose the number of removal orders they receive and report on how they use automated content moderation tools. Similarly, the Online Safety Bill requires large social media platforms, search engines, and messaging apps to tackle illegal content, giving the UK regulator Ofcom additional powers to take robust action on terrorist activity.
While Big Tech will be the most scrutinized under these bills, creating open-source software is still necessary for a safe internet ecosystem. Terrorists may be pushed towards smaller platforms, which do not have the data resources or investment capabilities to moderate content as effectively as Google and Meta. Making these tools open source puts all businesses on an equal footing, helping smaller, more vulnerable platforms enhance their protection.
Open-source tools are available for everyone, including terrorists
Open-source technology is part of the bedrock of the tech community, allowing faster development through the sharing of software breakthroughs. Unfortunately, when it comes to content moderation, terrorists and extremist groups can also study open-source software to try to circumvent it.
In July 2019, the UN-backed group Tech Against Terrorism highlighted Gab, an alt-right social media platform, which used source code from Mastodon, open-source software for running decentralized social media networks. Gab did so against the wishes of the Mastodon developer community, and according to Tech Against Terrorism, ISIS has also used Mastodon to build its own software and networks.
Big Tech must collaborate, using the wealth of resources available to it, to advance content moderation for all players. It is also important that regulation pushes Big Tech towards greater investment, so that it stays ahead of extremist and terrorist content and makes the internet safer for users.