
Major social media platforms, like Meta’s Facebook and Elon Musk’s X (formerly Twitter), are facing a variety of growing threats, according to a new report.
The 2025 edition of GlobalData’s Social Media report highlights the looming spectre of increased regulation in particular as a likely challenge. It contends that the social media landscape is at an inflection point in terms of the reach of regulation.
Social media regulation
“2024 was a landmark year in social media regulation,” the report explains. “The US Department of Justice’s victory over Google in its antitrust lawsuit, the postponed ban of TikTok in the US and the arrest of Telegram CEO Pavel Durov signalled the end of an era for social media companies. The perceived leniency afforded to them by regulators has been replaced by unprecedentedly stringent rulings.
“2025 may see more severe legislation as Australia’s move to ban social media for those under 16 takes effect, while the UK considers banning smartphones in schools. The year may also witness the outcome of Google’s second antitrust lawsuit regarding digital advertising, which could pave the way for more antitrust legislation against Big Tech companies.”
Five key regulatory trends are outlined – antitrust, online harm, human rights, copyright and data privacy. The report notes efforts around the world to crack down on the dominance of social media companies in the adtech space, the proliferation of harmful content such as hate speech and disinformation on social media platforms, perceived abuses and curtailments of freedoms and rights, the sharing of news content without paying publishers, and the perceived failure to adequately protect user data.
Social media moderation
Per the report: “Social media companies are under pressure to address the vast amount of harmful material on their platforms.” The failure to adequately moderate content could see platforms lose users and even face new regulatory measures or penalties.
Indeed, GlobalData suggests that emerging social media platforms are likely to benefit from the perceived failure of bigger, more established platforms to moderate content adequately. “Consumers’ increased concern about the use of personal data and content moderation will instead see smaller decentralised apps become more popular,” its report says.
However, for platforms with especially large user bases, the report notes that moderation can demand significant resources. For this reason – and to ease the mental health burden on human moderators – many have adopted community moderation and automated content moderation tools, with artificial intelligence (AI) playing a major role. Platforms associated with the alt-right and far-right, meanwhile, avoid rigorous moderation, ostensibly in the name of free speech.
Fundamentally, though, failure to ensure adequate moderation is expected to result in users losing interest and regulators gaining interest.
Social media competition
While user numbers for major platforms like Facebook, Instagram, YouTube and TikTok continue to rise, the report notes that smaller, more disparate networks may increase in popularity as a result of regulatory scrutiny of large social media companies’ business practices and users’ desire for more intimate engagement. Several have made inroads into the market recently, including Bluesky, Damus and Mastodon.
“The regulatory crackdown on monopolistic behaviour means the super-app model pioneered by Tencent in China is unlikely to be as successful in the West,” the report says. “Consumer demand will also impact attempts to build pools of connected apps, such as Elon Musk’s bid to transform X (formerly Twitter) into a super-app to diversify revenue streams away from personalised ads. Consumers’ increased concern about the use of personal data and content moderation will instead see smaller decentralised apps become more popular.”