Racist and Violent Ideas Jump From Web’s Fringes to Mainstream Sites

On March 30, the young man accused of the mass shooting at a Tops grocery store in Buffalo surfed through a smorgasbord of racist and antisemitic websites online. On BitChute, a video sharing site known for hosting right-wing extremism, he listened to a lecture on the decline of the American middle class by a Finnish extremist. On YouTube he found a lurid video of a car driving through Black neighborhoods in Detroit.

Over the course of the week that followed, his online writing shows, he lingered in furtive chat rooms on Reddit and 4chan but also read articles on race in HuffPost and Medium. He watched local television news reports of gruesome crimes. He toggled between “documentaries” on extremist websites and gun tutorials on YouTube.

The young man, who was indicted by a grand jury last week, has been portrayed by the authorities and some media outlets as a troubled outcast who acted alone when he killed 10 Black people in the grocery store and wounded three more. In fact, he dwelled in numerous online communities where he and others consumed and shared racist and violent content.

As the number of mass shootings escalates, experts say many of the disturbing ideas that fuel the atrocities are no longer relegated to a handful of tricky-to-find dark corners of the web. More and more outlets, both fringe and mainstream, host bigoted content, often in the name of free speech. And the inability — or unwillingness — of online services to contain violent content threatens to draw more people toward hateful postings.

Much of the imagery and text in the young man’s extensive writings, which included a diary and a 180-page “manifesto,” had circulated for years online. Often, it had infiltrated some of the world’s most popular sites, like Reddit and Twitter. His path to radicalization, illustrated in these documents, reveals the limits of the efforts by companies like Twitter and Google to moderate posts, images and videos that promote extremism and violence. Enough of that content remains that it can open a pipeline for users to find more extreme websites only a click or two away.

“It’s quite prolific on the internet,” said Eric K. Ward, a senior fellow at the Southern Poverty Law Center who is also executive director at the Western States Center, a nonprofit research organization. “It’s not just going to fall in your lap; you have to start looking for it. But once you start looking for it, the problem is that it starts to rain down on a person in abundance.”

The Buffalo attack has renewed focus on the role that social media and other websites continue to play in acts of violent extremism, with criticism coming from the public as well as government officials.

“The fact that this act of barbarism, this execution of innocent human beings, could be livestreamed on social media platforms and not taken down within a second says to me that there is a responsibility out there,” Gov. Kathy Hochul of New York said after the shooting in Buffalo. Four days later the state’s attorney general, Letitia James, announced that she had begun an investigation into the role the platforms played.

Facebook pointed to its rules and policies that prohibit hateful content. In a statement, a spokeswoman said the platform detects over 96 percent of content tied to hate organizations before it is reported. Twitter declined to comment. Some of the social media posts on Facebook, Twitter and Reddit that The New York Times identified through reverse image searches were deleted; some of the accounts that shared the images were suspended.

The man charged in the killings, Payton Gendron, 18, detailed his attack on Discord, a chat app that emerged from the video game world in 2015, and streamed it live on Twitch, which Amazon owns. The company managed to take down his video within two minutes, but many of the sources of disinformation he cited remain online even now.

His paper trail provides a chilling glimpse into how he prepared a deadly assault online, culling tips on weaponry and tactics and finding inspiration in fellow racists and previous attacks that he largely mimicked with his own. Altogether, the content formed a twisted and racist view of reality. The gunman considered the ideas to be an alternative to mainstream views.

“How does one prevent a shooter like me you ask?” he wrote on Discord in April, more than a month before the shooting. “The only way is to prevent them from learning the truth.”

His writings map in detail the websites that motivated him. Much of the information he cobbled together in his writings involved links or images he had cherry-picked to match his racist views, reflecting the kind of online life he lived.

By his own account, the young man’s radicalization began not long after the start of the Covid-19 pandemic, when he was largely restricted to his home like millions of other Americans. He described getting his news mostly from Reddit before joining 4chan, the online message board. He followed topics on guns and the outdoors before finding another devoted to politics, ultimately settling in a place that allowed a toxic mélange of racist and extremist disinformation.

Although he frequented sites like 4chan known to be on the fringes, he also spent considerable time on mainstream sites, according to his own record, especially YouTube, where he found graphic scenes from police cameras and videos describing gun tips and tricks. As the day of the attack neared, the gunman watched more YouTube videos about mass shootings and police officers engaged in gunfights.

YouTube said it had reviewed all the videos that appeared in the diary. Three videos were removed because they linked to websites that violated YouTube’s firearms policy, which “prohibits content intended to instruct viewers how to make firearms, manufacture accessories that convert a firearm to automatic fire, or livestreaming content that shows someone handling a firearm,” according to Jack Malon, a YouTube spokesman.

At the center of the shooting, like others before it, was a false conviction that an international Jewish conspiracy intends to supplant white voters with immigrants who will gradually take over political power in America.

The conspiracy, known as the “great replacement theory,” has roots reaching back at least to the czarist Russian antisemitic hoax called “The Protocols of the Elders of Zion,” which purported to be a Jewish plot to overtake Christianity in Europe.

It resurfaced more recently in the works of two French novelists, Jean Raspail and Renaud Camus, who, four decades apart, imagined waves of immigrants taking power in France. It was Mr. Camus, a socialist turned far-right populist, who popularized the term “le grand remplacement” in a novel by that name in 2011.

Mr. Gendron, according to the documents he posted, seemed to have read none of those; instead he attributed the “great replacement” notion to the online writings posted by the gunman who murdered 51 Muslims at two mosques in Christchurch, New Zealand, in 2019.

After that attack, New Zealand’s prime minister, Jacinda Ardern, spearheaded an international pact, called the Christchurch Call, that saw government and major tech companies commit to eliminate terrorist and extremist content online. Though the agreement carried no legal penalties, the Trump administration refused to sign, citing the principle of free speech.

Mr. Gendron’s experience online shows that the writings and video clips associated with the Christchurch shooting remain available to inspire other acts of racially motivated violence. He referred to both repeatedly.

The Anti-Defamation League warned last year that the “great replacement” had moved from the fringes of white supremacist beliefs toward the mainstream, pointing to the chants of protesters at the 2017 “Unite the Right” rally in Charlottesville, Va., that erupted in violence and the commentaries of Tucker Carlson on Fox News.

“Most of us don’t know the original story,” Mr. Ward of the Southern Poverty Law Center said. “What we know is the narrative, and the narrative of the great replacement theory has been credentialized by elected officials and personalities to such an extent that the origins of the story no longer need to be told. People are beginning to just understand it as if they might understand conventional wisdom. And that’s what is frightening.”

For all the efforts some major social media platforms have made to moderate content online, the algorithms they use — often meant to show users posts that they will read, watch and click — can accelerate the spread of disinformation and other harmful content.

Media Matters for America, a liberal-leaning nonprofit, said last month that its researchers found at least 50 ads on Facebook over the last two years promoting aspects of the “great replacement” and related themes. Many of the ads came from candidates for political office, even though the company, now known as Meta, announced in 2019 that it would bar white nationalist and white separatist content from Facebook and Instagram.

The organization’s researchers also found that 907 posts on the same themes on right-wing sites drew more than 1.5 million engagements, far more than posts intended to debunk them.

Although Mr. Gendron’s video of the shooting was removed from Twitch, it resurfaced on 4chan, even while he was still at the scene of the crime. The video has since spread to other fringe platforms like Gab and ultimately mainstream platforms like Twitter, Reddit and Facebook.

The advent of social media has in a fairly short period of time enabled nefarious ideas and conspiracies that once simmered in relative isolation to proliferate through society, bringing together people animated by hate, said Angelo Carusone, the president of Media Matters for America.

“They’re not isolated anymore,” he said. “They’ve been connected.”
