On January 7th, social media feeds were flooded with reactions to Mark Zuckerberg’s announcement of sweeping changes to Meta, most notably the end of its Fact-Checking Program in the U.S. The news quickly reverberated globally, raising concerns about the spread of disinformation and misinformation—and, ultimately, information integrity.
This blog post examines the implications of Meta’s decision for Instagram, Facebook, and Threads, focusing on both the immediate effects in the U.S. and the broader impact in Brazil, a country at the forefront of efforts to regulate big tech and online platforms.
Beyond the practical consequences, we explore larger socio-political questions, such as: What do these changes signal in an era of rising polarization? And how will they impact vulnerable groups?
What Exactly Are the Changes Announced by Zuckerberg?
The end of Meta’s Fact-Checking Program is part of a larger shift in how major platforms address—or ignore—information disorder (a term encompassing disinformation, misinformation, conspiracy theories, propaganda, and more). Below, we break down the key changes and their broader implications.
The End of the Fact-Checking Program
Meta launched its Fact-Checking Program in 2016, following Donald Trump’s election, amid mounting criticism over the spread of false news on its platforms. Public trust eroded further with the Facebook–Cambridge Analytica scandal, which exposed how Meta had allowed Cambridge Analytica to access the personal data of up to 87 million Facebook users without their consent. This data was used to create psychological profiles and support Trump’s campaign through targeted political ads. Together with separate revelations of Russian interference in the election, the scandal raised concerns that Trump’s victory may have been won under unfair conditions.
To repair its reputation, Meta funded over 90 third-party fact-checking organizations worldwide over the next eight years to verify political claims and reduce the spread of false or misleading information (Jingnan et al., 2025). The company also allocated approximately $100 million to support certified fact-checking organizations.
Despite these investments, and much to the surprise of many of its partners, who only learned of the decision alongside the public on January 7th, Meta is now abandoning the program in the U.S. Though the company said the change would not apply to other countries “at this time,” it is widely expected that the policy will eventually extend worldwide, with significant repercussions.
The most immediate consequence of ending the fact-checking partnerships, and one already being felt, is a significant reduction in funding for these initiatives worldwide. The New York Times reports that PolitiFact in the U.S. stands to lose 5% of its yearly revenue as a result, while other organizations expect cuts of up to 30%.
In a broader sense, this change signals a weakening of efforts to combat information disorder. According to the Global Risks Report 2025, misinformation and disinformation rank among the most pressing global threats, second only to armed conflict and environmental disasters. This shift could exacerbate the already growing crisis of false and misleading information online.
Finally, the financial loss is not just for fact-checkers—it also raises concerns about the broader integrity of journalism and democracy. Natália Leal, CEO of the Brazilian fact-checking organization Agência Lupa, stated in an interview with NPR: “The end of this program represents a lack of transparency and a lack of the value of the work, the journalism, in the world and the work of fact checkers.”
Meta’s decision must therefore be examined not just as a business policy shift, but within the larger socio-political landscape—one shaped by the new U.S. administration and international efforts to regulate tech giants.
Replacing Fact-Checking with “Community Notes”
The second major announcement in Mark Zuckerberg’s January video was that, instead of working with professional third-party fact-checkers, Meta will adopt a crowdsourced approach through a system called Community Notes. This system, modeled on Elon Musk’s approach at X (formerly Twitter), allows users to write, and to upvote or downvote, notes that appear alongside posts flagged as false or misleading.
One may ask: does Community Notes actually work? On X, Community Notes contributors remain anonymous and must be approved before they can submit corrections. Eligibility criteria include: no recent violations of X’s rules, a verified phone number, and an account active for at least six months. The eligibility criteria do not mention an expected timeline for being approved.
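For concreteness, these published criteria amount to a simple gate, which could be expressed as follows. This is a minimal sketch assuming hypothetical field names and a 182-day reading of “six months”; X does not publish its actual signup-check code.

```python
# A minimal sketch of X's published Community Notes eligibility criteria.
# The Account fields and the 182-day cutoff are illustrative assumptions;
# only the three rules themselves come from X's documentation.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Account:
    signup_date: date
    phone_verified: bool
    recent_rule_violations: int  # recent notices for violating X's rules

def is_eligible(account: Account, today: date) -> bool:
    six_months = timedelta(days=182)  # approximating "six months"
    return (
        account.recent_rule_violations == 0
        and account.phone_verified
        and today - account.signup_date >= six_months
    )

# An account from early 2024, verified, with a clean record, qualifies.
print(is_eligible(Account(date(2024, 1, 1), True, 0), date(2025, 1, 7)))  # True
```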
Before a correction is made public, however, it must be rated “helpful” by other contributors. As CBS News explains, “this is where things get tricky.” X’s “bridging-based algorithm” requires “ideological diversity” in votes before publishing a note. If an accurate correction is supported only by voters with similar ideological leanings, it may never appear publicly, even if the information is factually correct. Taking a step back, it is not obvious how ideological diversity can even be measured; X’s open-sourced ranking code offers one answer, sketched below.
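At its core, X’s open-sourced ranking code fits a matrix-factorization model: each rating is decomposed into a global mean, a rater bias, a note bias, and a product of latent “viewpoint” factors, and a note is published only when its bias term, the helpfulness shared across viewpoints, clears a fixed threshold. The toy sketch below illustrates that bridging idea; the function, hyperparameters, and data are our own simplified assumptions, not X’s pipeline.

```python
# A toy illustration of the "bridging" idea behind Community Notes ranking.
# We fit rating ~= mu + rater_bias + note_bias + rater_vec . note_vec by SGD;
# the learned note_bias rewards notes rated helpful ACROSS the latent
# viewpoint axis. Hyperparameters and data are illustrative assumptions.
import numpy as np

def score_notes(ratings, n_raters, n_notes, dim=1, epochs=2000, lr=0.05, reg=0.03):
    """ratings: (rater_id, note_id, value) triples, value 1.0 = helpful."""
    rng = np.random.default_rng(0)
    mu = 0.0
    rater_bias = np.zeros(n_raters)
    note_bias = np.zeros(n_notes)
    rater_vec = rng.normal(0.0, 0.1, (n_raters, dim))
    note_vec = rng.normal(0.0, 0.1, (n_notes, dim))

    for _ in range(epochs):
        for u, n, r in ratings:
            err = r - (mu + rater_bias[u] + note_bias[n] + rater_vec[u] @ note_vec[n])
            mu += lr * err
            rater_bias[u] += lr * (err - reg * rater_bias[u])
            note_bias[n] += lr * (err - reg * note_bias[n])
            u_old = rater_vec[u].copy()
            rater_vec[u] += lr * (err * note_vec[n] - reg * rater_vec[u])
            note_vec[n] += lr * (err * u_old - reg * note_vec[n])
    return note_bias  # publish a note only if this clears some threshold

# Raters 0-1 and 2-3 sit on opposite ends of the viewpoint axis.
# Note 0 is rated helpful by BOTH camps; note 1 by only one camp.
ratings = [(0, 0, 1.0), (1, 0, 1.0), (2, 0, 1.0), (3, 0, 1.0),
           (0, 1, 1.0), (1, 1, 1.0), (2, 1, 0.0), (3, 1, 0.0)]
print(score_notes(ratings, n_raters=4, n_notes=2))
# Note 0's bias comes out clearly higher: cross-viewpoint support is rewarded,
# while note 1's one-sided support is absorbed by the viewpoint factors.
```

The same mechanism produces the failure mode described above: a factually correct note backed by only one “camp” earns a low bias term and stays hidden.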
The effectiveness of the program remains highly questionable. A report by Poynter found that only about 8.5% of the approximately 122,000 notes written since the program’s inception have been made public. Additionally, in 2024, the Center for Countering Digital Hate (CCDH) conducted an analysis of 283 misleading posts on X and discovered that 209 of them—74%—did not have accurate Community Notes visible to all users, leaving false or misleading claims about the elections uncorrected. These findings suggest that the system may be suppressing accurate information under the guise of achieving ideological neutrality.
Moving Content Moderators from California to Texas
Zuckerberg also announced that, as part of the effort to reduce “over-censorship,” the company’s Trust and Safety content moderation teams would be relocated from California to Texas. Unlike the international contractors and outsourced workers responsible for reviewing and deleting harmful content, these teams develop the policies, technology, and resources that prevent user harm, effectively guiding content moderation worldwide.
Zuckerberg justified the move by stating that it would “remove concerns about biased employees censoring content” and “build trust in places where there’s less concern about team bias.” However, content moderation teams play a critical role in ensuring users do not encounter hate speech, pornography, or violent content. Weakening these policies could expose vulnerable groups to greater harm, as explored in the next section.
Zuckerberg’s statement about relocating Meta’s Trust and Safety content moderation teams to Texas may be more symbolic than practical. According to reporting by The Guardian (2025), Meta has been shifting operations to Texas for over a decade, making the announcement less a major operational change than a political statement. Experts suggest this move serves two key purposes: first, to align Meta with Trump’s administration, which has repeatedly criticized content moderation as a form of “censorship”; and second, to signal opposition to progressive ideologies by relocating from Democratic-leaning California to a conservative stronghold.
This shift also raises a broader question: if California is considered biased due to its progressive politics, wouldn’t Texas, a deeply conservative state, also be subject to the same critique? Yet, in the post-truth era, such contradictions often go unchallenged, reinforcing the idea that bias is framed not as an objective reality but as a political weapon wielded selectively.
The Situation in Brazil: A Regulation-Seeking Government
Business is political. In his video announcement, Mark Zuckerberg stated that Meta is “…going to get back to our roots, focus on reducing mistakes, simplifying our policies, and restoring free expression.” While this messaging emphasizes “free speech” and “reducing censorship”—both seemingly used as objective concepts—it is important to deconstruct what they actually mean and entail in the political context in which this announcement was made.
The decision came just days before Trump’s inauguration for his new presidential term, and weeks after Zuckerberg’s visit to Mar-a-Lago to meet with the then-President-elect. The timing has fueled speculation that Meta’s changes align politically with the new administration. This is particularly striking given the previously tense relationship between Trump and Zuckerberg: just last year, Trump threatened Zuckerberg with “life in prison,” accusing him of plotting against him in the 2020 election (Isenstadt, 2024).
Brendan Nyhan, a political scientist from Dartmouth College, described this dynamic as “performative fealty” for NPR—business leaders signaling loyalty to Trump’s administration, potentially as a strategic move to avoid regulation (Jingnan et al., 2025).
Yet Meta’s policy shifts don’t just affect the U.S.—they have far-reaching global implications, for example in Brazil, a country that has actively pushed for stronger regulations on big tech.
Zuckerberg seemed well aware of the global impact his statements would have. In his January statement, he said Meta would “work with President Trump to push back on governments around the world that are going after American companies and pushing to censor more” and that “Latin American countries have secret courts that can order companies to quietly take things down.”
In Brazil, many interpreted this as a direct criticism of the Brazilian government, particularly given the country’s recent legal actions against major social media platforms.
For example, in late 2024, X (formerly Twitter) was temporarily banned in Brazil after its owner, Elon Musk, refused to comply with legal requirements to appoint a local representative. The Supreme Federal Court ruled that X had encouraged extremist, anti-democratic discourse and obstructed judicial orders. Musk, in response, framed the ban as an attack on free speech, claiming that “an unelected pseudo-judge in Brazil is destroying democracy for political ends.”
In response to these regulatory efforts, Brazilian civil society has pushed back against Meta’s changes, warning that weaker content moderation could increase harm to marginalized communities. One of the strongest voices in this debate is Nina da Hora, a computer scientist and founder of the Da Hora Institute.
Speaking to us for this piece, Nina stated:
“These actions were strategically thought out to show CEOs and Big Tech that they are dealing with a nation that has rules, laws, policies, and, most importantly, a society that is organized.”
The Brazilian government has already responded to Meta’s announcement, giving the company a grace period to explain how it will protect users before implementing its new policies. The Federal Attorney General’s Office (AGU) has also filed an extrajudicial complaint against Meta, citing concerns that the changes could disproportionately harm vulnerable populations. Jorge Messias, who heads the AGU, publicly demanded that Meta “categorically explain to Brazilian authorities how it will protect children, teenagers, women, and small business owners who rely on the platform.”
The statement captures one of the key concerns around the changes announced by Meta: that they could trigger and exacerbate harassment and online attacks against those who criticize them.
Nina da Hora, for example, faced significant backlash on Instagram after a guest appearance on Brazil’s Globo network, where they discussed the dangers of weakened content moderation.
“When Mark Zuckerberg released the video outlining Meta’s new stance—removing moderation and verification, especially for terminology that is crucial to protecting groups already subject to violence—concerns about online harm only expanded. These changes automate moderation completely, relying on user behavior to self-regulate. If you already have organized groups targeting individuals or mass-reporting posts simply because they disagree with the content, this is a huge red flag.”
After receiving a flood of hostile comments on Instagram, Nina disabled comments on their posts. However, the comments then spilled over onto the social media platforms of organizations they are affiliated with—including us (DPA) and Eureka, with whom they collaborate on a Book and Movie Club focused on reimagining the future of technology.
Nina is not the only high-profile figure facing retaliation for criticizing Meta’s policy shift. Erika Hilton, a Member of Brazil’s Chamber of Deputies, has been one of the most outspoken voices condemning Meta’s decision. She has even called on the UN to take action, arguing that Meta’s increasingly hands-off approach to content moderation makes it an accomplice to the spread of harmful content that violates Brazilian laws protecting marginalized groups.
Following her advocacy, Erika faced intense online backlash, including accusations that she was “afraid of free speech,” harassment urging her to leave social media, and explicitly violent and transphobic comments questioning her right to public visibility and influence.
The attacks on Nina and Erika underscore a deeper trend in the post-truth era: the weaponization of skepticism against institutions and fact-based accountability. Lobo and Bolzan de Morais (2021) describe this as “the fight of science against the post-truth.” As Bjola and Papadakis (2020) argue, post-truth environments favor emotional appeal and symbolic rhetoric over objective facts, leading to a world where truth itself becomes malleable.
One comment Nina received on Instagram encapsulates this thinking:
“Can you clarify who checks the fact-checkers? Obviously, if someone says something against me, even if it is true, it is my right to deny it until the end. They [fact-checkers] will still say that the person lied and deserves prison like in the times of the dictatorship. So, tell me, as a defender of the owners of the truth, who will say that the checkers are right, will there be a judge and proof?” (translated from Portuguese)
This type of reasoning fuels distrust in any system that seeks to combat information disorder. It enables actors to discredit factual information under the guise of questioning authority—a strategy that has been widely used in election denialism, vaccine misinformation, and authoritarian propaganda.
In Closing
Brazil’s pushback against Meta’s new policies reflects a larger global reckoning over the role of social media platforms in shaping public discourse. But beyond platform policies, this debate is part of a much bigger cultural transformation—one driven by deep-rooted social inequalities, geopolitical shifts, and economic tensions that fuel polarization and information disorder worldwide.
At the core of this crisis is a fundamental question: How do we address the underlying socio-political and economic forces that allow disinformation and polarization to thrive? We suggest that, first and foremost, we must confront the broader systems of power, inequality, and algorithmic influence that shape online and offline realities.
Zuckerberg’s announcement marks a turning point in how big tech companies approach content regulation. Whether this shift unleashes a new wave of disinformation and hate speech or not remains to be seen. But one thing is certain: the fight for information integrity in the digital age is far from over, and it cannot be won without addressing the deeper forces driving information disorder in the first place.
Sources
Berlinski, N., Doyle, M., Guess, A. M., Levy, G., Lyons, B., Montgomery, J. M., Nyhan, B., & Reifler, J. (2021). The effects of unsubstantiated claims of voter fraud on confidence in elections. Journal of Experimental Political Science, 10(1), 34–49. https://doi.org/10.1017/xps.2021.18
Bjola, C., & Papadakis, K. (2020). Digital propaganda, counterpublics and the disruption of the public sphere: the Finnish approach to building digital resilience. Cambridge Review of International Affairs, 33(5), 638–666. https://doi.org/10.1080/09557571.2019.1704221
Data-Pop Alliance. (2024, December 11). Book and movie club “(Re)Imagining Technologies: Paths to the Future”. https://datapopalliance.org/projects/title-book-and-movie-club-reimagining-technologies-paths-to-the-future/
Direitos digitais | Instituto da Hora. (n.d.). https://www.institutodahora.com/
Eureka. (n.d.). https://www.eureka.club/en
Funke, D. (2023, August 29). Why Twitter’s Community Notes feature mostly fails to combat misinformation. Poynter. https://www.poynter.org/fact-checking/2023/why-twitters-community-notes-feature-mostly-fails-to-combat-misinformation/
G1. (2025, January 10). Governo cobra explicações da Meta sobre mudanças na política de moderação de plataformas no Brasil. https://g1.globo.com/politica/noticia/2025/01/10/governo-vai-cobrar-explicacoes-da-meta-sobre-mudanca-na-politica-de-moderacao-em-plataformas-diz-ministro.ghtml
Gaber, I., & Fisher, C. (2021). “Strategic lying”: the case of Brexit and the 2019 U.K. election. The International Journal of Press/Politics, 27(2), 460–477. https://doi.org/10.1177/1940161221994100
GloboNews. (n.d.). [Video post]. Facebook. https://www.facebook.com/GloboNews/videos/564943519738160/
Jingnan, H., Bond, S., & Allyn, B. (2025, January 7). Meta says it will end fact-checking as Silicon Valley prepares for Trump. NPR. https://www.npr.org/2025/01/07/nx-s1-5251151/meta-fact-checking-mark-zuckerberg-trump
Kerr, D. (2025, January 13). Meta moderators were already in Texas before Zuckerberg announced move, say ex-workers. The Guardian. https://www.theguardian.com/technology/2025/jan/13/meta-moderators-texas-zuckerberg-trump
Lee, A. M. (2025, January 10). What is Community Notes, and how will it work on Facebook and Instagram? CBS News. https://www.cbsnews.com/news/what-is-community-notes-twitter-x-facebook-instagram/
Leingang, R. (2025, January 8). Meta’s fact checking partners brace for layoffs. The Guardian. https://www.theguardian.com/technology/2025/jan/08/meta-layoffs-factchecking-partners
Lobo, E., & Bolzan de Morais, J. L. (2021). New technologies, social media, and democracy. Opinión Jurídica, 20(41). https://www.researchgate.net/publication/350028412_New_technologies_Social_Media_and_Democracy
Montellaro, Z. (2024, August 28). Trump, Zuckerberg and the election book: A new controversy. Politico. https://www.politico.com/news/2024/08/28/trump-zuckerberg-election-book-00176639
Nguyễn, S., Moran, R. E., Nguyen, T., & Bui, L. (2023). “We Never Really Talked About politics”: Race and Ethnicity as Foundational Forces Structuring Information Disorder Within the Vietnamese Diaspora. Political Communication, 40(4), 415–439. https://doi.org/10.1080/10584609.2023.2201940
Kaplan, J. (2025, January 7). More speech and fewer mistakes. Meta. https://about.fb.com/news/2025/01/meta-more-speech-fewer-mistakes/
Ortutay, B. (2024, October 31). Report says crowd-sourced fact checks on X fail to address flood of US election misinformation | AP News. AP News. https://apnews.com/article/x-musk-twitter-misinformation-ccdh-0fa4fec0f703369b93be248461e8005d
Partido Socialismo e Liberdade (PSOL). (2024). Erika Hilton (PSOL) aciona ONU contra Meta e Mark Zuckerberg por ameaças à população LGBT. https://psol50.org.br/erika-hilton-psol-aciona-onu-contra-meta-e-mark-zuckerberg-por-ameacas-a-populacao-lgbt/
Reuters. (2024, October 31). Musk’s X ineffective against surge of US election misinformation, report says. Reuters. https://www.reuters.com/world/us/musks-x-ineffective-against-surge-us-election-misinformation-report-says-2024-10-31/
Romo, V. (2025, January 10). Meta expands international fact-checking efforts amid misinformation concerns. NPR. https://www.npr.org/2025/01/10/nx-s1-5252738/meta-fact-checking-international
Santos, S. F. (2024, August 31). Musk’s X suspended in Brazil after disinformation row. https://www.bbc.com/news/articles/c5y3rnl5qv3o
The New York Times. (2025, January 7). Meta Says It Will End Its Fact-Checking Program on Social Media Posts. The New York Times. https://www.nytimes.com/live/2025/01/07/business/meta-fact-checking
United Nations. (n.d.). Information Integrity | United Nations. https://www.un.org/en/information-integrity
Valenzuela, S., Muñiz, C., & Santos, M. (2022). Social media and belief in misinformation in Mexico: a case of maximal panic, minimal effects? The International Journal of Press/Politics, 29(3), 667–688. https://doi.org/10.1177/19401612221088988
Wikipedia contributors. (2025, January 11). Nina da Hora. Wikipedia. https://en.wikipedia.org/wiki/Nina_da_Hora
Wikipedia contributors. (2025, January 28). Facebook–Cambridge Analytica data scandal. Wikipedia. https://en.wikipedia.org/wiki/Facebook%E2%80%93Cambridge_Analytica_data_scandal
World Economic Forum. (2025, January). Global risks report 2025: Conflict, environment, and disinformation top threats. [Press release]. https://www.weforum.org/press/2025/01/global-risks-report-2025-conflict-environment-and-disinformation-top-threats/
X.com. (n.d.). X (Formerly Twitter). https://x.com/YouTubeLiaison/status/1803072740175597643?lang=en
X. (n.d.). Signing up to contribute to Community Notes. Community Notes. https://communitynotes.x.com/guide/en/contributing/signing-up