The UK government has announced it will no longer be forcing social media companies to remove “legal but harmful” content as part of its Online Safety Bill, just one week before it’s set to return to Parliament.
The controversial revision comes after the government was heavily criticised by activists and lawmakers who claimed the clause would have a detrimental effect on free speech online.
Before the revision, Big Tech companies like Meta, Twitter and Snap would have faced sanctions if they failed to remove content that was considered harmful, but not illegal, by the UK government.
“The bill will no longer define specific types of legal content that companies must address,” the Department for Digital, Culture, Media & Sport wrote in a statement.
“This removes any influence future governments could have on what private companies do about legal speech on their sites, or any risk that companies are motivated to take down legitimate posts to avoid sanctions.”
Social media companies are still required to remove all illegal content from their platforms – including criminal acts like harassment, stalking, fraud, sale of drugs and weapons, and revenge porn.
However, they’ll now be able to set their own policies on how to deal with content that may be harmful but not illegal.
While this has been seen as a win for free speech by a number of activists, some experts believe that the change in rules has weakened the Online Safety Bill.
“Revisions to the Online Safety Bill fail to adequately protect children from illegal and harmful content,” Rob Blake, UK managing director of Channel Factor, the brand suitability partner to YouTube, told Verdict.
“With more kids than ever before accessing social media, it’s becoming increasingly clear that brands must take immediate action – or risk grave consequences for vulnerable young viewers on an unprecedented global scale.”
The Online Safety Bill was created with protecting children in mind. Earlier this year, it was ruled that social media content glorifying self-harm had contributed “more than minimally” to the tragic suicide of 14-year-old Molly Russell.
The teen’s death led officials to introduce a new criminal offence for assisting or encouraging self-harm, The Register reported.
Despite other provisions of the Online Safety Bill remaining, such as requiring social media platforms to publicly specify a minimum age of entry, some feel it does not go far enough.
“Social media is broken and recent amendments to the Online Safety Bill are hugely disappointing, only confirming that we are not making progress,” Sue Fennessy, the CEO and founder of social media company WeAre8, told Verdict.
Fennessy claims the axed plan to make big tech remove legal but harmful content “has watered down the bill and continues to expose the dangers people experience online.”
However, some still see positives in the new bill and say it will meaningfully improve online safety.
“Despite this week’s statements, the new legislation marks an important step in improving user safety,” Sarah Pearce, senior data privacy and protection lawyer for global law firm Hunton Andrews Kurth, told Verdict.
“Regardless of your stance on this issue, the new bill will likely lead to change in how online content is managed by large tech companies active in the UK.”
GlobalData is the parent company of Verdict and its sister publications.