The use of social media by terrorist groups to radicalise young people, together with the rise of right-wing extremism, has pushed tech companies to monitor user content more closely.
Verdict has taken a look at some of the ways the world’s social media giants are tackling online hate crime.
1. Artificial intelligence
Google, the world’s most popular search engine, has unveiled an artificial intelligence (AI) tool called Perspective to identify abusive comments online.
The software is currently being trialled by a range of news organisations, including The New York Times, The Guardian and The Economist.
“News organisations want to encourage engagement and discussion around their content, but find that sorting through millions of comments to find those that are trolling or abusive takes a lot of money, labour and time,” said Jared Cohen, president of Jigsaw, the technology incubator created by Google.
The software uses machine learning to score comments for toxicity, and it is most effective when used in partnership with human content reviewers.
People can feed specific words and phrases into Perspective to see how they have been rated. The word nigger, generally considered one of the most offensive in the English language, was rated as 82 percent toxic, for example.
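Google exposes Perspective through a public REST endpoint, the Comment Analyzer API. As a minimal sketch of how a publisher might use it, assuming a valid API key and the documented TOXICITY attribute, a request looks roughly like this; the 0.8 review threshold is an arbitrary illustration, not Google's recommendation:

```python
import requests

API_KEY = "YOUR_API_KEY"  # assumption: a key issued for the Perspective API
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def toxicity(text: str) -> float:
    """Return Perspective's TOXICITY score for a comment (0.0 to 1.0)."""
    body = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=body)
    response.raise_for_status()
    data = response.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Rather than deleting automatically, route high-scoring comments
# to a human moderator -- the threshold here is an arbitrary example.
if toxicity("some reader comment") > 0.8:
    print("queue for human review")
```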
“All of us are familiar with increased toxicity around comments in online conversations,” Cohen added.
2. Demoting abusive content
Earlier this month, Twitter announced a “safe search” feature, allowing users to stop abusive threads from appearing in their feed.
Tweets from accounts a user has previously blocked or muted will also vanish from their feed, even if those accounts are mentioned by other users elsewhere on the platform.
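Twitter has not published how the feature is implemented, but the rule itself is simple. The following Python sketch, with entirely hypothetical names, is purely illustrative of what filtering a timeline against a viewer's block and mute lists amounts to:

```python
from dataclasses import dataclass

@dataclass
class Tweet:       # hypothetical stand-in for a tweet object
    author: str
    text: str

def filter_timeline(tweets: list[Tweet],
                    blocked: set[str],
                    muted: set[str]) -> list[Tweet]:
    """Hide tweets whose author the viewer has blocked or muted,
    even when those tweets surface via mentions elsewhere."""
    hidden = blocked | muted
    return [t for t in tweets if t.author not in hidden]

feed = filter_timeline(
    [Tweet("alice", "hello"), Tweet("troll", "abuse")],
    blocked={"troll"},
    muted=set(),
)
# feed now contains only alice's tweet
```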
People who have already had their account shut down for abuse will be prevented from creating a new username. However, the company added that this would only be enforced in the most severe cases.
“This focuses more effectively on some of the most prevalent and damaging forms of behaviour, particularly accounts that are created only to abuse and harass others,” said Ed Ho, the company’s vice-president of engineering, in a blog post.
Twitter, with a monthly active user base of 319m, has also become more active in suspending or banning abusive accounts.
Last year the platform suspended 235,000 accounts in six months for promoting terrorism. Daily account suspensions in 2016 were up 80 percent on 2015.
In July, Milo Yiannopoulos, the right-wing commentator and former tech editor of Breitbart News, was permanently banned from Twitter after tweeting racist comments about Ghostbusters actress Leslie Jones.
Months later, Twitter temporarily suspended several accounts belonging to members of the so-called alt-right, including Richard Spencer, president of the National Policy Institute, a white supremacist think tank based in Arlington, Virginia.
Facebook, which boasts about 1.86bn monthly active users, has similar community standards.
“We remove content, disable accounts, and work with law enforcement when we believe there is a genuine risk of physical harm or direct threats to public safety,” the company insists.
3. Joint written agreements
In May last year, US tech giants including Google, Facebook, Twitter and Microsoft signed a code of conduct with Brussels, requiring them to review the majority of flagged hate speech within 24 hours and remove it if necessary.
“We value civility and free expression, and so our terms of use prohibit advocating violence and hate speech on Microsoft-hosted consumer services,” said John Frank, vice-president of EU Government Affairs at Microsoft, in a statement issued at the time.
“We recently announced additional steps to specifically prohibit the posting of terrorist content. We will continue to offer our users a way to notify us when they think that our policy is being breached. Joining the Code of Conduct reconfirms our commitment to this important issue,” he added.
By signing this code of conduct, the companies commit to continue training their employees and to build on existing internal procedures to crack down on illegal hate speech online.
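None of the signatories has said how it tracks the 24-hour window internally. As a purely hypothetical sketch, a moderation queue honouring the deadline might compute review deadlines for flagged posts like this:

```python
from datetime import datetime, timedelta, timezone

REVIEW_WINDOW = timedelta(hours=24)  # review deadline set by the code of conduct

def review_deadline(flagged_at: datetime) -> datetime:
    """Latest time a flagged post can be reviewed and still meet the code."""
    return flagged_at + REVIEW_WINDOW

def is_overdue(flagged_at: datetime, now: datetime) -> bool:
    """True if the flag has sat unreviewed past the 24-hour window."""
    return now > review_deadline(flagged_at)

flagged = datetime(2017, 2, 1, 9, 0, tzinfo=timezone.utc)
print(is_overdue(flagged, now=datetime(2017, 2, 2, 10, 0, tzinfo=timezone.utc)))
# prints: True
```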