Election Security, Fraud Management & Cybercrime, Fraud Risk Management
Facebook Unveils Community Notes Program But Has Done Little to Curb Fraud
Meta has decided to end its fact-checking program. Meta CEO Mark Zuckerberg announced significant changes to the company’s moderation policies and practices on Tuesday, attributing the shift to a renewed commitment to free speech.
The company also reversed a policy that had reduced the amount of political content in user feeds. While these controversial consumer and voter protection programs sparked political firestorms in the U.S., other countries including Singapore and Australia have been advocating for stricter measures to hold social media companies accountable for scams and misinformation.
While Meta says the move is part of its push for transparency and open discourse, others see it as a step backward in global efforts to combat financial fraud and disinformation. It comes at a time when lawmakers worldwide are demanding greater responsibility from tech giants in the rollout of artificial intelligence tools, the use of deepfake technology and protections for user privacy.
Zuckerberg revealed plans to replace the fact-checking system with a community notes model, similar to the approach used by X, formerly Twitter. The decision means Meta will stop proactively labeling posts that peddle hate speech, politically sensitive claims or pseudo-science. Instead, automated systems will focus on "high-severity violations," with lower-severity content reviewed only in response to user reports.
Since 2016, Meta's fact-checking program had referred questionable posts to independent organizations to assess their credibility. The program drew criticism from President-elect Donald Trump, who called it "censorship of right-wing voices." Speaking after the changes were announced, Trump told a news conference he was pleased with Zuckerberg's decision and that Meta had "come a long way," according to a BBC report.
Zuckerberg and other Big Tech executives have been cozying up to Trump since the former president won his re-election bid on Nov. 5, 2024. Zuckerberg dined with Trump at his Mar-a-Lago resort in Florida on Nov. 27, 2024, and two weeks later, Meta donated $1 million to the Trump inauguration fund. Outside of the U.S., the CEO said third-party fact-checking will continue in the United Kingdom and European Union for now, likely due to the heightened focus on Big Tech by government regulators.
Critics argue Zuckerberg's move to "get back to our roots around free expression" could open the door to more disinformation, scams and fraud. History shows that anytime controls are relaxed, fraud goes up. With little government regulatory oversight, the decision does nothing to solve a longstanding problem: the lack of accountability by social media platforms. Moreover, under Section 230 of the Communications Decency Act, which dates back to the internet's infancy in 1996, Congress explicitly shields social media platforms from liability for the content that users post on their sites. Unfortunately, social media has become a much darker place over the past two decades with the rise of pig butchering, money muling, crypto scams, disinformation and hate speech - to name just a few.
While the decision shifts policy optics, it's unlikely to protect users from false information. When has Facebook - or any social media platform - ever truly cared about the truth? In fact, anyone who has tried to report scam content to Meta knows that "scam" and "fraud" don't even appear as reporting options (see: Ever Tried to Report a Scam on Facebook? Good Luck!).
Since the 2016 U.S. presidential election, critics have accused Meta's fact-checkers of silencing opposing views while pushing their own agendas. Contributors to community notes will not carry the same perceived authority as fact-checkers, and this may help some people weigh social media posts for what they are worth rather than accepting everything posted as the truth. The new program fits a well-honed model used by social media companies, which have always prized engagement metrics over accountability. But Meta's move raises questions about its commitment to user safety. While some may perceive it as a progressive change, the decision once again highlights Meta's lack of commitment to curbing fraud.
Community Notes
Community notes are written by platform users who meet the eligibility criteria to participate in the program. A note is displayed after users achieve a majority consensus on its accuracy. It includes a correction, often supported by a link to an online source that verifies the fact check. It can be viewed as a product or seller review. The more your contributions align with what the algorithm considers relevant, credible and helpful, the greater influence your input may have on community notes decisions or visibility.
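To make the consensus step concrete, here is a minimal sketch of a "bridging" publication rule. This is purely illustrative: the real systems (X's Community Notes ranking, and whatever Meta ultimately deploys) use far more sophisticated methods such as matrix factorization over full rating histories. The cluster labels, thresholds and function names below are all hypothetical; the sketch only shows the core idea that a note is displayed when raters who usually disagree both find it helpful.

```python
# Illustrative sketch only - not Meta's or X's actual algorithm.
# Idea: a note is "shown" only when raters from at least two distinct
# viewpoint clusters independently rate it helpful, reducing the chance
# that one side can push its own notes through.
from collections import defaultdict


def note_status(ratings, min_ratings=5, helpful_threshold=0.7):
    """ratings: list of (rater_cluster, is_helpful) tuples.

    rater_cluster is a hypothetical label grouping raters by how they
    have rated past notes; is_helpful is that rater's verdict.
    """
    by_cluster = defaultdict(list)
    for cluster, helpful in ratings:
        by_cluster[cluster].append(helpful)

    total = sum(len(v) for v in by_cluster.values())
    if total < min_ratings or len(by_cluster) < 2:
        # Too few ratings, or all ratings come from one cluster.
        return "needs more ratings"

    # Every cluster must independently rate the note helpful.
    if all(sum(v) / len(v) >= helpful_threshold for v in by_cluster.values()):
        return "shown"
    return "not shown"
```

Under this toy rule, a note backed by only one cluster never reaches "shown" no matter how many helpful ratings it collects - which also hints at the failure mode discussed below, where notes stall without cross-cluster agreement.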
But many believe the community notes process is slower and will leave more misinformation without clarifying notes because of failures to reach community consensus. According to a report from The Washington Post, only 7.4% of election-related notes proposed on X in 2024 were actually posted - and that proportion dropped to just 5.7% in October.
Whether Meta can address the existing problems with community notes remains to be seen. The test of any system is how it works in practice. For now, fraud fighters must continue their battle on their own. Countries such as Australia have made significant strides in holding social media companies to account in the fight against scams. In the name of free speech, the U.S. still has a long way to go.