Meta's decision to phase out independent fact-checking teams for its social platforms has continued to draw scrutiny. AFP

Will Meta’s reversal on content moderation pay off?



One of Meta's first major announcements of 2025, the move to phase out fact checkers and replace them with a community notes-based system, is still fuelling debate in the tech world.

Some have accused the social media giant of making the changes to try to curry favour with US President Donald Trump, who was banned from the platform shortly after the January 6, 2021 insurrection, when a mob of his supporters stormed the Capitol building in Washington.

Others, however, have viewed Meta's fact-checking decision as inevitable.

“I’m not surprised by the move,” said Neil Johnson, professor of physics and head of the dynamic online networks laboratory at George Washington University.

Neil Johnson, head of the dynamic online networks lab at George Washington University. Photo: GWU

The lab studies some of the world’s most dangerous online communities and their influence on users.

“We map out the whole ecosystem, not just Meta, but who connects to Meta and who Meta's communities connect to, and it's basically an untenable situation to have [independent] fact checking,” Prof Johnson said.

When Meta announced its new approach last month, it said it would emulate the strategy implemented by X, which uses a crowdsourced approach to content moderation known as “community notes”.

The feature allows users to provide more information and evidence contradicting posts or content that breaches the platform's terms of service. Posts on X deemed to be misleading by enough people are later given notes that explain the inaccuracy of the posts in detail.

Meta's changes to its fact-checking policies will be felt on its Facebook, Instagram and Threads platforms. Getty Images

Posts by government officials and celebrities have been among those flagged with community notes.

Prof Johnson said that while far from perfect, the community notes approach could be relatively effective at scale – something that Meta's older, independent fact-checking approach failed to achieve.

“If you have a group of people who have a range of opinions, not just the extremes, but a range of opinions, it is shown in experimental settings that that can soften extremes. So in principle, it can work,” he said. “The trouble is, if in those community notes there's one extreme and small representation of the mainstream, that can kind of escalate off in directions that you don't want.”

Prof Johnson said Meta had probably been planning on revising its content moderation approach for some time, but decided to use Mr Trump's return to the White House as an opportunity to quickly make the change.

“I’m not surprised that they almost profited from this moment to just kind of throw in the towel,” he said.

The concerns voiced by marginalised communities after Meta's decision – that hate speech and hostility might soon become more prevalent on its platforms – are valid, Prof Johnson explained.

“It brings us closer to the reality of society,” he said. “There will be more of that extreme material, and it will become the job of us, the five billion users, to actually call out that behaviour.”

Meta has not yet introduced its community notes in the US, but that has not stopped commentators, analysts and users from speculating about how the changes might affect content on Facebook and Instagram.

During a recent appearance on the podcast On with Kara Swisher, Dave Willner, former head of content policy at Facebook, said that other changes to the company's platforms are flying under the radar.

“It came out that they are also turning off the sort of ranking algorithms around misinformation or potential misinformation, which is going to really change how information flows through the system,” he said.

“You can think of it as changing the velocity with which certain kinds of misinformation spreads through the network … this feels to me like a much bigger deal in terms of the amount of content it affects and the amount of views that it affects, than whether or not fact checks are applied to a relatively small number of stories because of the scalability of the process.”

Meta has said it will phase in its community notes approach over several months.

The company also said it would stop the practice of displaying full-screen warnings about potentially problematic content that users have to click through before viewing a post, and instead would be using “a much less obtrusive label”.

“We think this could be a better way of achieving our original intention of providing people with information about what they’re seeing – and one that’s less prone to bias,” the company said.

Updated: February 11, 2025, 11:03 AM
