Big Tech Can’t Ban Its Way Out of This

Platforms are scrambling to avoid being used by right-wing extremists targeting the inauguration. But the seeds of this crisis were sown long ago. 

Social media companies took a great deal of criticism for their role in enabling the January 6 breach of the Capitol. Now, with Joe Biden’s inauguration a few days away, and federal officials warning that the risk of more violence is high, they’re scrambling to avoid being implicated in any additional attacks.

It’s hard to keep track of all the steps taken by tech companies over the past week and a half. (The nonprofit First Draft is heroically trying.) These companies include platforms that were directly implicated in the riot, as well as several that provide services like hosting and payment, and some that had no connection at all. To give a partial list: Facebook, Twitter, and YouTube have all suspended or banned Donald Trump. Facebook and YouTube are suspending users who continue to say the election was stolen. Twitter says it has purged some 70,000 QAnon accounts. Facebook is blocking new events near locations like the Capitol, and along with Google has paused all political ads (again). Google also kicked the conservative-friendly Parler off its app store for failing to have a robust content moderation system. So did Apple, while Amazon canceled Parler’s hosting contract. Companies less accustomed to public scrutiny have also joined the crackdown. The streaming platform DLive banned users who had livestreamed themselves breaking into the Capitol. Telegram, a notoriously laissez-faire messaging and social media platform popular among far-right groups, announced that it had removed dozens of public channels because of “public calls to violence.” TikTok says it is blocking videos of Trump’s January 6 speech and hashtags associated with the attack. Zello, a walkie-talkie app used at the Capitol riot, says it has deleted more than 2,000 channels associated with white supremacists and right-wing militias. Even Peloton has found it necessary to ban the #StopTheSteal tag within its app.

We should welcome these companies’ efforts to do their part to ward off further violence, even if there’s an element of self-preservation involved. We should also keep in mind that the platforms are only one part of the picture—they can’t make Trump or his most dead-end followers disappear. But to the extent that social media does bear responsibility for what happened at the Capitol, these hastily imposed emergency measures are almost beside the point. The online seeds of real-world violence were sown long before January 6.

If social media platforms do manage to make a difference in the coming week, it will most likely be by making it harder for extremist groups to organize and plan. Losing access to Facebook and Twitter makes it more difficult to spread the message to people who aren’t already seeking it out. On the other hand, there are many ways for would-be terrorists to communicate, and shutting down public groups and channels creates the risk that planning will simply relocate to private or encrypted spaces that are harder for law enforcement to monitor. The messaging app Signal, which uses end-to-end encryption by default, has seen a huge spike in downloads this past week.

“There has always been a pretty even debate,” said Steven Stalinsky, the executive director of the Middle East Media Research Institute and an expert on how extremists organize online. “Leave it up and you get intelligence value; shut it down and maybe it’s going to stop people from recruiting or planning.”

“Some of these people are not smart,” he added, referring to domestic extremists. “Especially when it comes to white supremacist groups. They were never under any scrutiny before. Some of them use their real names, or in their pictures there will be clues. They’re not taking security very seriously.”

Still, Stalinsky said he comes down on the “take it down” side when it comes to violent extremism. His perspective was forged by years of monitoring the online activity of Islamic terrorists. The Islamic State, in particular, became notorious for its strategic use of social media in the 2010s. “This might sound crazy, but if it were not for Twitter, ISIS would not have been ISIS,” he said. “They used it so effectively for recruitment, for spreading their ideology, for growing.” After the 2014 murder of the American journalist James Foley, Stalinsky said, Twitter took the problem seriously and largely purged ISIS from its platform. Banished from major social media networks, the group migrated to Telegram and other chat apps.

Platforms have been criticized for years for treating white nationalism more leniently than Islamic extremism. To the extent that right-wing domestic terrorists use social media for recruitment, however, the last-minute moves announced in the past week are probably too late to have any impact on violence surrounding the inauguration. Recruitment, such as it is, has been going on for years. YouTube has been shown to make it easier for communities to form around radical right-wing viewpoints; Facebook’s recommendation algorithms have notoriously steered people into more extreme groups. It’s also tricky to analogize the Capitol rioters directly to ISIS. The mob is an ad hoc alliance aimed at a particular, immediate goal—keeping Trump in office—rather than an ideological organization with fixed long-term ambitions. While some members appear to belong to organized militias and white supremacist groups, many tributaries feed the “Stop the steal” river, including QAnon adherents, who are not inherently organized around violence, and people who simply believe Trump’s claims that the country is being stolen from them and feel motivated to act.

Indeed, providing a forum for lies about the election is probably the most important way in which social media platforms have contributed to the current atmosphere of political violence, and it’s also the one that most obviously defies any quick fix. Facebook and YouTube are shutting down accounts that repeat lies about a stolen election, but at this point tens of millions of Americans already believe those false claims. For the companies to have made a difference here, they would have had to start a lot earlier.

To be fair, in some ways they did start earlier. (Much less so YouTube, which tends to get away with being less aggressive about disinformation.) In the months leading up to and following the election, the companies made unprecedented efforts to steer users to accurate information and apply fact-checking labels to claims of electoral fraud. Those moves don’t seem to have been effective, but one can understand why the companies were hesitant to start taking down every post disputing the election results. It’s untenable for a platform of any real scale to police all false content—especially when it comes to politics, which is all about trying to convince voters to accept a certain version of reality. In an era of intense polarization, it isn’t always clear which lies will be the ones to spark violence until it happens.

It’s a mistake, however, to analyze social media’s culpability solely in terms of a binary decision to take something down or leave it up. The effect these companies have on discourse is much more deeply woven into their basic design, which prioritizes engagement above all else. To understand one way in which this plays out, I highly recommend a recent New York Times article by Stuart A. Thompson and Charlie Warzel. They analyzed public Facebook posts from three far-right users, including one who was part of the crowd outside the Capitol on January 6. All three, the authors found, started out posting normal stuff, to limited reaction. Once they shifted to extreme posts—whether it was encouraging “Stop the steal” protests, Covid denialism, or spreading false claims about rigged ballot counts—their engagement skyrocketed: more likes, more comments, more shares. More attention.

What’s so fascinating about this report, anecdotal though it may be, is that it shows social media prompting people to do and say extreme things they otherwise wouldn’t have. One person profiled in the Times article explains that his attendance at the DC Trump rally has caused his family to disown him—but he has no plans to stop what he’s doing. Another gets arrested. Their stories suggest that the disinformation problem isn’t just about the incentive that platforms have to show users incendiary content; it’s also about the incentives the platforms create for users to produce that content in the first place, in order to experience the addictive thrill of getting other people’s attention. That goes to the heart of how the companies make their money. It can’t be solved by banning certain groups or hashtags. Their executives occasionally recognize this—Twitter CEO Jack Dorsey tweeted on Wednesday, “We need to look at how our service might incentivize distraction and harm”—but they have yet to translate soul-searching into action.

It would of course be ludicrous to suggest that internet platforms are solely or even primarily responsible for the attack at the Capitol or law enforcement’s inadequate preparation for it. America’s history of political violence long predates social media, and people who want to do harm will always find ways to communicate. Fox News remains the country’s most influential source of right-wing disinformation. And when the most powerful person in the world wants to cause trouble, there’s a limit to what social networks can do to stop him.

And yet it’s hard to deny that social media has helped get us where we are. Fixing that will require much more than scrambling to purge the bad guys once blood has already been spilled.

