Facebook continues to autogenerate pages for Proud Boys, other extremist groups

Four years after Facebook first banned the Proud Boys, the social network is still creating new content on behalf of the far-right extremist group, as well as dozens of other white supremacist groups, militias and known terrorist organizations, in apparent violation of its own stated policies.

According to recent findings by the Tech Transparency Project (TTP), an industry watchdog, Facebook not only regularly allows users to display the names of extremist groups in their profiles, including some groups that have been explicitly banned from the site, but also autogenerates pages for those groups, increasing their visibility and giving their supporters a place to connect.

These pages are the products of a long-standing Facebook feature that automatically generates a new page any time users list a job title, business, interest or location on their profile that doesn’t already have an official page of its own. The subjects of these pages are usually harmless: local coffee shops, record stores or activities like hiking.

But TTP has found that the same technology is also regularly used to create business or interest pages for a wide variety of groups that Facebook itself has deemed dangerous, raising concerns about the platform’s ability to enforce its own policies against violent or hateful content.

One particularly startling example is an unofficial interest page for "Proud Boys USA" created on Jan. 6, 2021, the day a violent mob, allegedly including several members of the Proud Boys, stormed the U.S. Capitol and temporarily delayed the certification of the 2020 election.

Facebook first announced an official ban on the Proud Boys in 2018, after a violent incident involving members of the far-right group in New York City. Since then, it has announced sweeping efforts, most recently this August, to remove hundreds of accounts and pages associated with the group, whose leaders have been charged with seditious conspiracy in connection with the Jan. 6 insurrection.

And yet the autogenerated Proud Boys page, which was first identified last month by TTP, remained active for more than a year and a half. Facebook only removed the page — and others identified by TTP — after they were flagged by Yahoo News in advance of this story’s publication.

“Are Facebook's detection systems really as good as they claim, if their own systems where these explicit terms and groups are banned are actually creating pages for them?” asked TTP’s director, Katie Paul. “How can we trust what they say about safety and security when Facebook doesn't just fail to remove, but actually creates extremist content?”

Over the years, Facebook has faced scrutiny over the way its platform has been used by terrorists, hate groups and other violent actors to organize and promote harmful content. In response, it has announced a number of efforts to crack down on an ever-expanding list of dangerous individuals and organizations, using a combination of advanced artificial intelligence and human moderators.

But extremism experts and whistleblowers have criticized these measures as ineffective, noting that blacklisted groups regularly fall through the cracks, often remaining active on the platform well after they've been banned.

In a 2019 petition to the Securities and Exchange Commission, an anonymous whistleblower first revealed that Facebook "actively promotes terror content across the website via its auto-generated features."

The whistleblower petition identified dozens of autogenerated pages for white supremacist groups as well as designated foreign terrorist organizations like ISIS and al-Qaida that, it declared, appeared “to be assisting individuals who profess sympathy for extremist groups in finding and networking with one another.”

Since then, TTP has produced multiple reports on the autogenerated extremist pages, which Facebook has continued to create despite increased scrutiny over the platform's role in facilitating the spread of COVID-19 misinformation and election conspiracy theories, and in permitting organizing by groups that stormed the Capitol on Jan. 6.

Among others, TTP has found that Facebook continues to autogenerate pages for a number of far-right paramilitary groups whose members have been charged in connection with the insurrection, including the Oath Keepers and Three Percenters. The FBI has accused members of the Oath Keepers — including one of the group's leaders currently on trial for seditious conspiracy — of using Facebook Messenger to track down members of Congress while the Capitol was under siege.

In August, TTP published the findings of a study in which it conducted searches for the names of 226 white supremacist groups that have been labeled hate groups by the Anti-Defamation League, the Southern Poverty Law Center and Facebook itself. The study found that more than a third of those groups had a presence on Facebook, and that they were associated with a total of 119 pages — 24 of which were autogenerated by Facebook itself.

The same study also found that Facebook’s algorithm often recommended other extremist content to users who visited the white supremacist pages, and that searches for these groups were often monetized.

Erica Sackin, who heads up communications for Dangerous Organizations and Individuals at Facebook’s parent company, Meta, declined to comment on the record for this story. Instead, Sackin directed Yahoo News to a statement the company had previously issued in response to the TTP’s August report, which stated: “We immediately resolved an issue where ads were appearing even if a user searched for terms related to banned organizations and we are also working to fix an auto generation issue, which incorrectly impacted a small number of pages.

“We will continue to work with outside experts and organizations in an effort to stay ahead of violent, hateful, and terrorism-related content and remove such content from our platforms,” the statement read.

So far, however, Facebook’s primary response to the autogeneration issue appears to be removing individual pages once they’ve garnered media attention.

Domestic extremists aren't the only ones benefiting from this problem. Researchers have repeatedly found that Facebook also generates pages on behalf of Islamist terrorists, including those that have been explicitly banned from the site under its policy against entities and individuals designated by the U.S. government as Foreign Terrorist Organizations.

“There are a number of different networks that spread Islamist terrorist content, whether it’s al-Qaida or Islamic State, in a number of different languages across Facebook at a given time,” said Moustafa Ayad, executive director for Africa, the Middle East and Asia at the Institute for Strategic Dialogue, a think tank that tracks extremist activity online.

Ayad told Yahoo News that a popular current strategy used by these networks is to hack into the accounts of existing Facebook users and update their public profiles to say things like “I work for the Islamic State,” prompting Facebook to autogenerate a business page for the designated terrorist group.

“This is the platform itself creating, essentially, meeting spots, like locus points for terrorist supporters to gather… [and] enabling terrorists to spread content and narratives farther,” Ayad said. “In terms of being able to confront the issue of online terrorist [activity], that’s a big problem.”

Facebook executives — including CEO Mark Zuckerberg — have repeatedly dodged questions from lawmakers about the company's practice of autogenerating pages for groups involved in real-world violence.

Most recently, the issue was raised by Sen. Gary Peters, D-Mich., at a hearing last month on social media's impact on homeland security. Peters, who serves as chairman of the Senate Homeland Security and Governmental Affairs Committee, asked Meta's chief product officer, Chris Cox, to explain "why, after several years of warnings by external organizations," Facebook continues to "automatically generate homepages for white supremacists and other extremist and terrorist groups such as ISIS."

“Doesn’t this feature allow extremist groups to basically recruit members more easily because you’re putting this up?” Peters asked. Specifically, he pointed to a Facebook page for the Aryan Brotherhood, which had been automatically generated and allowed to remain active on the platform for 12 years before it was eventually taken down in August 2022.

Initially, Cox insisted that Facebook “wouldn't have put this page up ourselves,” but when Peters reiterated that the page had been autogenerated, Cox said he wasn’t familiar “with this specific example.”

“Despite touting investments in trust and safety, social media platforms are still amplifying extremist and dangerous content — including content related to white nationalist and anti-government ideologies — and it’s clear they lack the right incentives and safeguards to stop it,” Peters said in a statement to Yahoo News.

Siri Nelson, executive director of the National Whistleblower Center, which worked with the anonymous whistleblower who first revealed that Facebook was autogenerating extremist content, speculated that it’s not in Facebook’s financial interest to address this problem.

“It costs money to effectively control such a massive operation,” Nelson said. She suggested that autogenerated pages also likely help Facebook sell targeted ads.

“Despite Facebook's misleading rhetoric about social responsibility and community building, they are at its core a capitalist endeavor that has no moral values and will do anything to make money, including fostering hate and facilitating crime,” said Nelson. “And as long as they can get away with it and make the money that they're aiming to make, they're going to continue to do it.”

For years, Facebook and other social media companies have managed to avoid accountability for harmful activity on their sites, thanks to a provision of the 1996 Communications Decency Act known as Section 230, which states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

The controversial provision has been interpreted as broadly immunizing websites, including social media platforms, from being held legally responsible for content posted by their users. But some, including the National Whistleblower Center, question whether Section 230’s protections extend to content — such as the autogenerated pages — that Facebook itself is creating.

“This isn't just passive content that's being posted on a platform and algorithmically amplified, this is Facebook actually making those pages,” said the TTP’s Paul.

The broad immunity enjoyed by social media platforms will soon face its greatest threat yet. The Supreme Court said Monday that it will take up two cases this term that challenge the scope of Section 230 and could determine whether companies can be sued for hosting and recommending terrorist content.