Docs: Facebook Knew Its Algorithm Pushed Extremism
Originally posted by Adam_PoE
The Committee's list is a who's who of the biggest players on the Internet, including Google, YouTube, Twitter, Facebook, Reddit, Snap, Twitch, Telegram and TikTok. On the list are a number of smaller, pro-Trump sites that have sprung up in recent years, including Gab and Parler, as well as known cesspools 4chan and 8kun (formerly 8chan).

Specifically, the Select Committee wants records relating to the spread of misinformation, efforts to overturn the results of the 2020 election, efforts to prevent certification of the election, foreign influence attempts in the election, and domestic violent extremism.
The Committee is also seeking materials from these companies relating to any policy changes that were considered or adopted to address misinformation, violent extremism, and foreign malicious influence.
In summer 2019, a new Facebook user named Carol Smith signed up for the platform, describing herself as a politically conservative mother from Wilmington, North Carolina. Smith's account indicated an interest in politics, parenting, and Christianity, and followed a few of her favorite brands, including Fox News and then-President Donald Trump.
Though Smith had never expressed interest in conspiracy theories, in just two days, Facebook was recommending she join groups dedicated to QAnon, a sprawling and baseless conspiracy theory and movement that claimed Trump was secretly saving the world from a cabal of pedophiles and Satanists.
Smith was not a real person. A researcher employed by Facebook invented the account, along with those of other fictitious "test users," in 2019 and 2020 as part of an experiment studying the platform's role in misinforming and polarizing users through its recommendation systems.
Internal documents show that Facebook declined to deploy mitigation tactics when chief executive Mark Zuckerberg objected on the grounds that they would cause too many "false positives" or might stop people from engaging with its platforms.
The documents report, for example, that Facebook research based on data from 2019 found that misinformation shared by politicians was more damaging than that coming from ordinary users. Yet that same year the company maintained a policy that explicitly allowed political leaders to lie without facing the possibility of fact checks.
An undated internal document also revealed that XCheck, the "cross-check" program created to prevent "P.R. fires" by imposing an extra layer of oversight when the accounts of politicians and other users with large followings faced enforcement action, had devolved into a widely abused "white list" that effectively placed the powerful largely beyond the reach of company policies.