Social Media Are a Mass Shooter’s Best Friend

A terrorist attack in New Zealand cast new blame on how technology platforms police content. But global internet services were designed to work this way, and there might be no escape from their grip.

Forty-nine people are dead and 20 more injured after terrorist attacks on two New Zealand mosques Friday. One of the alleged shooters is a white man who appears to have announced the attack on the anonymous-troll message board 8chan. There, he posted images of the weapons days before the attack, and an announcement an hour before. On 8chan and Twitter, he also posted links to a 74-page manifesto, titled “The Great Replacement,” blaming immigration for the displacement of whites in Oceania and elsewhere. The manifesto cites “white genocide” as a motive for the attack, and calls for “a future for white children” as its goal.

The person who wrote the manifesto, identified by authorities as a 28-year-old Australian named Brenton Tarrant, also live-streamed one of the attacks on Facebook; Tarrant appears to have posted a link to the stream on 8chan before carrying out the attack.

It’s terrifying stuff, especially since 8chan is one of a handful of sites where disaffected internet misfits create memes and other messages to provoke dismay and sow chaos among the “normies” outside their ranks, whom they often see as suckers at best, oppressors at worst. “It’s time to stop shitposting,” the alleged shooter’s 8chan post reads, “and time to make a real-life effort post.” Many of the responses, anonymous by 8chan’s nature, celebrate the attack, with some posting congratulatory Nazi memes. A few seem to decry it, even if just for logistical quibbles. Still others lament that the whole affair might destroy the site, a concern that betrays its users’ priorities.

Social-media companies scrambled to take action as the news of the attack, and the video itself, spread. Facebook eventually pulled down Tarrant’s profiles and the video, but only after New Zealand police brought the live-stream to the company’s attention. Twitter also suspended Tarrant’s account, where he had posted links to copies of the manifesto hosted on several file-sharing sites.

The chaotic aftermath mostly took place while many North Americans slept unaware, waking up to the news and its associated confusion. By morning on the East Coast, news outlets had already weighed in on whether technology companies might be partly to blame for catastrophes such as the New Zealand massacre because they have failed to catch offensive content before it spreads. But the internet was designed to resist the efforts of any central authority to control its content—even when a few large, wealthy companies control the channels by which most users access information.


“Tech companies basically don’t see this as a priority,” the counter-extremism policy adviser Lucinda Creighton told CNN. “They say this is terrible, but what they’re not doing is preventing this from reappearing.” Others affirmed the importance of quelling the spread of the manifesto, video, and related materials, for fear of producing copycats, or of at least furthering radicalization among those who would be receptive to the message. “Do not share the video or you are part of this,” said a retired FBI agent who now works as an analyst for CNN.

That might be impossible. When I started catching up on the shooting this morning, I stumbled upon the video of the massacre while searching for news. I didn’t intend to watch it, but it autoplayed in my Twitter search results, and I couldn’t look away until it was too late. I wish I’d never seen it, but I didn’t even get a chance to ponder that choice before Twitter forced it upon me. The internet is a Pandora’s box that never had a lid.

The circulation of ideas might have motivated the shooter as much as, or even more than, ethnic violence. As Charlie Warzel wrote at The New York Times, the New Zealand massacre seems to have been made to go viral. Tarrant teased his intentions and preparations on 8chan. When the time came to carry out the act, he provided a trove of resources for his anonymous brethren, scattered to the winds of mirror sites and repositories. Once the live-stream started, one 8chan user posted “capped for posterity” on Tarrant’s thread, meaning that he had downloaded the stream’s video for archival and, presumably, future upload to other services, such as Reddit or 4chan, where other like-minded trolls or radicals would ensure the images spread even further. As Warzel put it, “Platforms like Facebook, Twitter, and YouTube … were no match for the speed of their users.”

Defending himself and Facebook before Congress last year against myriad failures, including allowing Russian operatives to disrupt American elections and permitting illegal housing ads that discriminate by race, Mark Zuckerberg repeatedly invoked artificial intelligence as a solution for the problems his and other global internet companies have created. There’s just too much content for human moderators to process, even when pressed hard to do so under poor working conditions. The answer, Zuckerberg has argued, is to train AI to do the work for them.

But that technique has proved insufficient. For one thing, AI is an aspirational solution for a future that has not arrived; it gives Zuckerberg and others rhetorical cover more than technological outcomes. For another, detecting and scrubbing undesirable content automatically is extremely difficult. False positives enrage earnest users or foment conspiracy theories among paranoid ones, thanks to the black-box nature of computer systems. Worse, given a pool of billions of users, the clever ones will always find ways to trick any computer system, for example, by slightly modifying images or videos to make them appear different to the computer but identical to human eyes. 8chan, as it happens, is largely populated by computer-savvy people who have self-organized to perpetrate exactly those kinds of tricks.
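To make that cat-and-mouse dynamic concrete, here is a minimal sketch of difference hashing (dHash), one simple member of the family of perceptual-hashing techniques platforms use to recognize re-uploads of known images. It is an illustration under assumed details, not any company’s actual system (production tools such as Microsoft’s PhotoDNA are far more sophisticated), and the file names below are hypothetical.

```python
# Minimal, illustrative dHash sketch (assumes the Pillow library is installed).
from PIL import Image


def dhash(image: Image.Image, hash_size: int = 8) -> int:
    """Hash the coarse structure of an image: each bit records whether
    a pixel is brighter than its right-hand neighbor."""
    # Shrink and desaturate so only coarse structure survives; fine
    # detail (and thus minor tampering) is mostly averaged away.
    small = image.convert("L").resize((hash_size + 1, hash_size), Image.LANCZOS)
    pixels = list(small.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | int(left > right)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two hashes disagree."""
    return bin(a ^ b).count("1")


# Hypothetical usage: compare a known frame against a re-encoded,
# slightly cropped copy. An exact-match filter (distance == 0) misses
# the copy; a looser threshold catches it but flags more innocent images.
# known = dhash(Image.open("known_frame.png"))
# copy = dhash(Image.open("recompressed_copy.png"))
# print(hamming_distance(known, copy))
```

The looser the distance threshold, the more doctored copies a platform catches, and the more innocent images it wrongly flags: exactly the false-positive bind described above.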

The primary sources are only part of the problem. Long after such attacks, YouTube has bolstered conspiracy theories about mass shootings, successfully replacing truth with lies among broad populations of users who might not even know they are being deceived. Even stock-photo providers are licensing stills from the New Zealand shooter’s video; a Reuters image that shows the perpetrator wielding his rifle as he enters the mosque is simply credited, “Social media.”


The video is just the tip of the iceberg. Many smaller and less obviously inflammatory messages have no hope of being found, isolated, and removed by technology services. The shooter praised Donald Trump as a “symbol of renewed white identity” and cited the conservative commentator Candace Owens, who took the bait on Twitter in a post that got retweeted thousands of times by the morning after the attack. The shooter’s forum posts and video are littered with memes and inside references that bear special meaning within certain communities on 8chan, 4chan, Reddit, and other corners of the internet, offering tempting receptors for consumption and further spread.

Perhaps worst of all, the forum posts, the manifesto, and even the shooting itself might not have been carried out with the purpose that a literal read of their contents suggests. At first blush, it seems impossible to deny that this terrorist act was motivated by white-supremacist hatred, an animosity that authorities like the retired FBI agent and the Facebook officials would want to snuff out before it spreads. But 8chan is notorious for an ironic, above-it-all approach to all of its perversities, a squalor amplified by the anonymity intrinsic to the service. Here, where users post “for the lulz,” or just to get a rise out of those who aren’t in the know, the ideology embraces chaos before it does zealotry. And the internet separates images from context and action from intention, then spreads those messages quickly among billions of people scattered all around the globe.

That structure makes it impossible to even know what individuals like Tarrant “really mean” by their words and actions. As it spreads, social-media content neuters earnest purpose entirely, putting it on the same level as anarchic randomness. What a message means collapses into how it gets used and interpreted. For 8chan trolls, any ideology might be as good as any other, so long as it produces chaos. No one can find safe harbor from this upheaval. Even here, I am forced to tiptoe around the question of what truly motivated Tarrant and his apparent accomplices. In the process, I risk playing into the hands of conspiracy theorists and trollish contrapuntists on Twitter, YouTube, Facebook, Reddit, and all the rest, underplaying white-supremacist violence by casting it as an epiphenomenon of disgruntled internet culture. There is no winning at this game. The Atlantic originally illustrated this story with the stock-image still from the live-stream video, only to change it after internet users found it offensive. Did we do the right thing, or simply perpetuate the disquiet 8chan hoped for? The answer is unknowable.

It’s easy to say that technology companies can do better. They can, and should. But ultimately, that’s not the problem. The problem is the media ecosystem they have created. The only surprise is that anyone would still be surprised that social media produce this tragic abyss, for this is what social media are supposed to do, what they were designed to do: spread the images and messages that accelerate interest, without check, and absent concern for their consequences. It’s worth remembering that “viral” spread once referred to contagious disease, not to images and ideas. As long as technology platforms drive the spread of global information, they can’t help but carry it like a plague.

Ian Bogost is a contributing writer at The Atlantic.