SAN FRANCISCO (KCBS RADIO) – This week Mark Zuckerberg, CEO of Facebook parent company Meta, announced that the company would make changes to its fact-checking policy. Here’s how those changes are expected to impact users.
First off, Zuckerberg said in a video announcing the changes: “It’s time to get back to our roots around free expression.”
Facebook’s roots date back to 2003 with the debut of Facemash, “an online service for students to judge the attractiveness of their fellow students,” per Britannica. Though that site was shut down, it prompted Zuckerberg to develop Facebook along with fellow students at Harvard.
While Zuckerberg harkened back to the early days of Facebook itself, users can expect the platform going forward to look a bit more like another popular social media site – Elon Musk-owned X.
“We will end the current third party fact checking program in the United States and instead begin moving to a Community Notes program,” said Meta Chief Global Affairs Officer Joel Kaplan in a Tuesday message. “We’ve seen this approach work on X – where they empower their community to decide when posts are potentially misleading and need more context, and people across a diverse range of perspectives decide what sort of context is helpful for other users to see.”
Kaplan explained that Facebook launched a fact-checking program in 2016 with the intention of having independent experts give people more information about things such as viral hoaxes. However, he said “that’s not the way things played out, especially in the United States,” and added that experts “have their own biases and perspectives.”
According to Kaplan, the Meta team believes the new Community Notes model will be less prone to bias. Notes will be written and rated by contributing users, and they will require agreement between people with a range of perspectives. Community Notes will be phased in on Facebook in the U.S. over the “next couple of months,” and the company plans to continue working on the feature over the course of the year.
“People can sign up today (Facebook, Instagram, Threads) for the opportunity to be among the first contributors to this program as it becomes available,” said Kaplan, adding that Meta will strive to be transparent about the process.
He explained that “up until now” Meta has used automated systems to scan for all policy violations, an approach that he said resulted in “too many mistakes” and the censorship of too much content.
“For example, in December 2024, we removed millions of pieces of content every day. While these actions account for less than 1% of content produced every day, we think one to two out of every 10 of these actions may have been mistakes,” said Kaplan. Meta still plans to use the automated systems to address illegal and high-severity violations, like terrorism, child sexual exploitation, drugs, fraud and scams, while relying on user reports to flag “less severe” policy violations.
With the new approach, Kaplan said that people should not find themselves “wrongly locked up in ‘Facebook jail’” as often. Moving forward, Meta plans to get rid of fact-checking controls, stop demoting fact-checked content and use less obtrusive labels that indicate additional information is available for those who want to see it.
“We’re getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate,” Kaplan said. He called changes made in 2021 to reduce the amount of civic content people saw on the site a “pretty blunt approach,” and said such content will be phased back in with a more personalized approach.
This will be based on explicit signals such as liking a post or implicit signals such as viewing a video. Meta is also planning to offer options for people to control how much political content they see.
Another planned change is intended to make the appeal process for enforcement decisions move faster. Kaplan said the company has added extra staff to work on the “frustratingly slow” process and will require more reviews before content is taken down.
“We are working on ways to make recovering accounts more straightforward and testing facial recognition technology, and we’ve started using AI large language models (LLMs) to provide a second opinion on some content before we take enforcement actions,” Kaplan added.
These announcements have been met with mixed feedback.
In a press release this week, the non-profit Foundation for Individual Rights and Expression (FIRE) said that the changes “mirror recommendations FIRE made in its May 2024 Report on Social Media” and that they could “have major positive implications for free expression online.”
On the other hand, Helle Thorning-Schmidt of Meta’s Oversight Board told BBC Radio 4’s Today program that there were “huge problems” with what had been announced, including the potential impact on the LGBTQ+ community, as well as gender and trans rights, per a BBC report. She said that, in some cases, hate speech can turn into real-life harm.
Later this week, Axios reported that Meta is terminating its Diversity, Equity and Inclusion (DEI) programs, citing an employee memorandum. According to the report, “the move is a strong signal to Meta employees that the company’s push to make inroads with the incoming [President-elect Donald] Trump administration isn’t just posturing, but an ethos shift that will impact its business practices.”