Meta is the latest platform to scrap fact-checking in favour of ‘community notes’: rather than moderators combing through social media content, volunteers attach notes to posts.

CEO Mark Zuckerberg announced that this move would “dramatically reduce the amount of censorship” while recommending more political content. He claimed that fact-checkers were too politically biased and had destroyed more trust than they created.

Donald Trump’s sweeping electoral victory likely influenced this decision, and Zuckerberg has restructured his firm to cosy up to the president-elect. Nick Clegg, former deputy prime minister of the UK, resigned a few days before the announcement and was ultimately replaced by Republican Joel Kaplan.

As Meta and Zuckerberg attempt to get closer to the incoming administration, they are completely overhauling the way they moderate content. This raises the question: is community noting as we see it on X (formerly Twitter) better than traditional fact-checking?

The issues with fact-checking and the benefits of community notes

Fact-checkers, as a form of content moderation, do have issues. Their own opinions likely influence their decisions on the job, leading them to give certain sources more weight than others.

A fact-check is also, ultimately, the judgement of one person. Community notes require multiple people with different opinions to agree with a note before it appears under a post, and those people bring a range of experience and knowledge to the vetting process.


Community notes also allow for anyone, in theory, to add a note. Someone somewhere is going to know a lot about a niche issue, and the internet never forgets.

Community notes can also provide comic relief. One example: when TIME wrote that a fault in a dam could become Ukraine’s Chernobyl, conveniently forgetting that Chernobyl is in Ukraine, community noters quickly pointed it out. Personally, I love some of the gotcha moments that community notes deliver, as they can quickly put people in their place.

Twitter and its history with community noting

Community notes started as ‘Birdwatch’, launched in early 2021 to combat misinformation, including about the Covid-19 pandemic. It became the platform’s main form of content moderation after Elon Musk took over in 2022.

As someone who probably spends too much time on X, I don’t think community noting helps with misinformation. X has become misinformation central, with many of the community notes I see being politically and emotionally motivated. Posts with notes attached are also not taken down unless they violate X’s terms of service, which became far less stringent under Musk’s leadership.

Community notes also take time to be verified: a proposed note is published only once multiple raters from a “diversity of perspectives” agree that it is helpful. In 2022, 96% of all fact-checking notes contributed by X users never reached public view. This delay allows misinformation to spread rapidly before it is properly fact-checked.

Similar delays occurred around posts concerning the Southport riots in the summer of 2024, when X users circulated a false rumour that the attacker was a Muslim immigrant. In fact, the perpetrator was born in Cardiff to Christian Rwandan parents. The community noters were too slow on the draw, and the posts fuelled further riots and violence across the country.

A point worth highlighting is how X classifies community noters. To enforce its “diversity of perspectives” requirement, X uses an algorithm that places volunteers along an opinion spectrum based on their voting history. X also filters out some ratings deemed to be “low quality”, but it is unclear what that means. Who decides what kind of user is low quality? The system employed by X is ripe for political jerry-rigging.
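The “diversity of perspectives” rule can be sketched in miniature. The toy function below is illustrative only: the names, thresholds, and leaning scores are my own assumptions, and X’s actual note-ranking algorithm is far more involved (it infers rater positions via matrix factorisation over the full rating history rather than a fixed cutoff). But the core bridging idea is that a note surfaces only when raters from both sides of the inferred spectrum find it helpful.

```python
# Toy sketch of a "bridging" publication rule for community notes.
# Hypothetical simplification: real systems estimate rater leanings
# from rating history via matrix factorisation, not a fixed threshold.

def note_is_shown(ratings, min_per_side=2):
    """ratings: list of (rater_leaning, is_helpful) pairs, where
    rater_leaning is a float placing the volunteer on an opinion
    spectrum (negative = one side, positive = the other).
    The note is shown only if enough raters from *both* sides
    of the spectrum marked it helpful."""
    left = sum(1 for leaning, helpful in ratings if helpful and leaning < 0)
    right = sum(1 for leaning, helpful in ratings if helpful and leaning > 0)
    return left >= min_per_side and right >= min_per_side

# A note endorsed by only one side of the spectrum stays hidden:
one_sided = [(-0.9, True), (-0.4, True), (-0.2, True), (0.7, False)]

# A note endorsed across the spectrum becomes visible:
cross_cutting = [(-0.8, True), (-0.3, True), (0.5, True), (0.9, True)]
```

Even in this stripped-down form, the weakness discussed above is visible: whoever controls how leanings are estimated and which ratings count as “low quality” effectively controls which notes ever appear.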

Disinformation, the purposeful spread of false information for personal gain, is now rife on the app. X’s blue check, which lets subscribers earn money from engagement on posts, makes disinformation financially rewarding. But hey, don’t take my word for it: take the word of former and current X employees, who have stated that community noting “doesn’t address lies that are divisive”.

Community notes in practice

A NewsGuard analysis found that 250 common unsubstantiated claims about the Israel-Hamas conflict were viewed more than 100 million times, yet 68% of them carried no note. Accusations of political bias towards conservative causes are regularly levied against X, but that, if true, is a function of how the app is run, not of the system of community noting itself.

X’s community notes system ultimately encourages mob thought. A content moderation system that relies on the opinions of many people will not produce factual notation; rather, it will respond with emotional and political notes that provide relatively little value.

Facts or fiction?

A clear example of this lies with the CEO himself. Musk regularly offers takes that are either misleading or built on misinformation. Despite this, he has a group of loyal supporters who regularly vote down notes on his tweets. Peer review by popular decision means popular people and causes escape accountability, while unpopular factual information is left behind.

For community notes to be done well, Meta would need a permanent group of neutral fact-checkers to peer-review posts and place community notes together, akin to the very fact-checkers it is currently in the process of firing. Ultimately, however, Zuckerberg likely doesn’t mind a platform with more misinformation on it, so long as it doesn’t upset the new government.