Facebook nears a tipping point when it comes to moderating hate speech

The social media giant just banned dozens of “boogaloo” extremist accounts.

On Tuesday afternoon, Facebook announced that it had removed more than 200 accounts linked to the violent, anti-government extremist “boogaloo” movement. This move comes after weeks of criticism over the company’s handling of hate speech on its platform. Still, banning the boogaloo accounts does not solve Facebook’s larger hate speech problem.

More than 100 major brands, from Unilever to Verizon, have pulled advertising from the platform in the past week after civil rights groups called for a boycott. Facebook’s attempts to address the controversy have included announcing increased efforts to prevent voter suppression based on race and ethnicity and reportedly considering an audit of its moderation practices.

The controversy over social media companies and hate speech has intensified in recent weeks as protesters across the US have been fighting for greater racial justice. About a month ago, as protests were first breaking out, Facebook ignited outrage when it decided to leave up a post about the protests in which President Trump warned, “when the looting starts, the shooting starts.” The decision enraged civil rights leaders, as well as some of Facebook’s own employees, and it prompted the “Stop Hate for Profit” ad boycott, led by organizations such as the Anti-Defamation League and the NAACP. Three US senators joined the chorus on Tuesday, sending Facebook a letter asking the company to enforce its rules on extremist content more strictly.

Facebook has long had a policy explicitly forbidding hate speech. The company says it removes 89 percent of hate speech on the platform before it gets reported and has argued that while there are always exceptions at its scale, overall, it’s doing a fine job. A recent report by the European Commission found that Facebook was faster than some of its competitors in addressing instances of hate speech.

“We do not profit from hate. We have no incentive to have hate on our platform,” Facebook VP Nick Clegg said in a Bloomberg appearance on Tuesday and reiterated in a CNN appearance.

Regardless of the company’s claims about policing hate speech, recent events have furthered the perception that Facebook simply isn’t doing enough. Critics argue that the company makes exceptions for politicians like Trump and that overtly violent groups like the boogaloo movement have been able to keep gaining traction on the platform. They also note that some of Facebook’s smaller competitors, like Reddit and Twitter, have enforced their rules on hate speech more aggressively, banning or moderating accounts linked to President Trump and popular accounts that support him.

Facebook’s approach fits into the company’s established playbook. The reported audit, for instance, would be the third the company has commissioned. Facebook has taken down harmful conspiracy networks before, only to see them pop back up or new ones arise. Meanwhile, the ad boycott likely won’t have a dire financial impact, because the bulk of Facebook’s revenue comes from smaller brands (the top 100 advertisers accounted for little more than 6 percent of its revenue last year). But Facebook and other social media companies at least appear to be responding to this new source of pressure.

“What you’re seeing right now is people are leveraging various mechanisms — either economic or public relations — to push back on policies they don’t like,” Kate Klonick, an assistant professor at St. John’s University School of Law who studies social media and free speech, told Recode. “And you’re seeing the platforms give in.”

Twitter started a wave of social media companies coming down on Trump

Recent moves by Reddit, Snapchat, Twitch, and YouTube mark a decision by social media companies to start enforcing their rules around hate speech more strictly, weeks after protests over the police killing of George Floyd set off a national reckoning over systemic racism in the United States.

In many ways, Twitter started this wave of action when, in late May, it added a warning label for glorifying violence to President Trump’s “shooting … looting” post. This was a precedent-setting move for social media companies, which have been reluctant to moderate Trump, no matter how incendiary his rhetoric. (Twitter has spent the past two years refining its policies on moderating politicians’ speech.) Facebook then struck a nerve when it responded very differently to the same Trump post on its own platform: The company chose not to moderate the post, arguing that it wasn’t an incitement of violence but an announcement of state use of force.

Now other platforms are following Twitter’s lead.

On Monday, Reddit banned r/The_Donald — a popular message board for Trump fans to share memes, videos, and messages — for consistently breaking its rules around harassment and hate speech. The same day, Twitch, a livestreaming company owned by Amazon, temporarily suspended Trump’s account after finding that some of his livestreams included “hateful conduct,” such as a rebroadcast of Trump’s campaign kickoff rally where he said that Mexico was bringing rapists to the United States. Those moves follow Snapchat’s decision earlier in June to stop promoting Trump in its “Discover” section because his account had, in the company’s view, incited racial violence. And YouTube banned several high-profile far-right accounts, including those of white supremacist Richard Spencer and former KKK leader David Duke.

While plenty of hate speech on these platforms will likely continue to go unaddressed, the spate of takedowns and bans could carry serious political consequences. They run against the stated free speech values of early internet forums like Reddit, which have historically tried to take as laissez-faire an approach as possible to moderating content.

“I have to admit that I’ve struggled with balancing my values as an American, and around free speech and free expression, with my values and the company’s values around common human decency,” Reddit CEO Steve Huffman told reporters on Monday while announcing the decision to ban r/The_Donald, according to The Verge.

Even Facebook has notably drawn some lines with Trump. In the past few months, it has taken down a Trump campaign ad featuring Nazi insignia and at least two other Trump-sponsored pieces of Facebook content: an ad that tried to mislead people into filling out a fake census form and a post that infringed copyright. But the company is not reversing course on the president’s “looting ... shooting” post, and while it says it’s open to putting labels on political misinformation, it hasn’t yet done so with Trump.

There’s two-way political pressure on Facebook

Historically, Facebook and other social media companies have been cautious not to overly moderate content in the interest of appearing to protect free expression online. At the same time, President Trump and other Republican politicians have accused social media companies of having “anti-conservative” bias.

Trump has issued an executive order that attempts to undermine Section 230, a landmark internet law that shields social media companies like Facebook from being sued over what people post on their platforms. The rationale for targeting Section 230, according to Trump, is that Facebook is supposedly putting its thumb on the scales against Republican content — which is a largely unproven and, many argue, bad-faith claim.

That pressure puts Facebook in a bind. If it moderates popular conservative figures too heavily, even when those users post extremist or hateful content, it gives Trump and other Republicans fuel to argue that they’re being unfairly censored.

On the other hand, if Facebook doesn’t do a good enough job moderating white supremacist and other hateful content, Democrats, civil rights leaders, and major advertisers could continue to accuse the company of turning a blind eye to hate.

“I’m not going to pretend that we’re going to get rid of everything that people, you know, react negatively to,” said Facebook’s Clegg on CNN on Tuesday. “Politically, there are folks on the right who think we take down too much content, folks on the left who think we don’t take down enough.”

While all the major platforms have long had policies on hate speech, it has often taken major national events like mass shootings to pressure the companies into enforcing those policies more forcefully. So we’ll see whether Facebook makes a meaningful change in how it moderates content or whether it waits for the controversy to blow over, as it has in the past.

Ultimately, it looks as though Mark Zuckerberg will be the one who decides where Facebook draws the line between restricting hate speech and protecting free speech.
