
What Facebook knew about its Latino-aimed disinformation problem

Lily Qian / For The Times


It was October 2020, election conspiracy theories threatened to pull America apart at its seams, and Jessica González was trying to get one of the most powerful companies in the world to listen to her.

It wasn’t going well.

After months of trying to get on their calendar, González — the co-chief executive of media advocacy group Free Press — had finally managed to secure a meeting with some of the Facebook employees responsible for enforcing the social platform’s community standards. The issue at hand: the spread of viral misinformation among Latino and Spanish-speaking Facebook users.

Across the country, a pipeline of misleading media had been pumping lies and half-truths, in both English and Spanish, into local Latino communities. Sometimes the misinformation mirrored what the rest of the country was seeing: fear-mongering about mail-in ballots and antifa vigilantes, or conspiracy theories about the deep state and COVID-19. Other times it leaned into more Latino-specific concerns, such as comparing candidate Joe Biden to Latin American dictators or claiming that Black Lives Matter activists were using brujería — that is, witchcraft.

A Facebook logo in Paris in 2017.
(Thibault Camus / Associated Press)

Much of the fake news was spreading on social media, via YouTube, Twitter and, pivotally, Facebook, WhatsApp and Instagram. Those last three are owned by the same umbrella company, which recently rebranded as Meta.

“The same sort of themes that were showing up in English were also showing up in Spanish,” González recalled. “But in English, they were either getting flagged or taken down altogether, and in Spanish they were being left up; or if they were getting taken down, it was taking days and days to take them down.”



Free Press had briefly flagged the problem in July 2020 during a meeting with Chief Executive Mark Zuckerberg. González had spent the months since trying to set up another, more focused conversation. Now, that was actually happening.

In attendance were Facebook’s public policy director for counterterrorism and dangerous organizations, its global director for risk and response, and several members of the company’s policy team, according to notes from the meeting reviewed by The Times.

Yet the talk didn’t go as González had hoped.

Seen on the screen of a device in Sausalito, Calif., Facebook CEO Mark Zuckerberg announces the company’s new name, Meta, during a virtual event on Oct. 28.
(Eric Risberg / Associated Press)

“We had a lot of specific questions that they completely failed to answer,” she said. “For instance, we asked them, who’s in charge of ensuring the integrity of content moderation in Spanish? They would not tell us the answer to that, or even if that person existed. We asked, how many content moderators do you have in Spanish? They refused to [answer] that question. How many people that moderate content in Spanish are based in the U.S.? ... No answer.”

“We were consistently met much the same way they meet other groups that are working on disinformation or hate speech,” she added: “With a bunch of empty promises and a lack of detail.”

Free Press wasn’t alone in finding Facebook to be a less than ideal partner in the fight against Spanish-language and Latino-centric misinformation. Days after the election, it and almost 20 other advocacy groups — many of them Latino-centric — sent a letter to Zuckerberg criticizing his company’s “inaction and enablement of the targeting, manipulation, and disenfranchisement of Latinx users” during the election, despite “repeated efforts” by the signatories to alert him of their concerns.


“Facebook has not been transparent at all,” said Jacobo Licona, a disinformation researcher at the Latino voter engagement group Equis Labs. Moreover, he said, it “has not been cooperative with lawmakers or Latinx-serving organizations” working on disinformation.

But inside Facebook, employees had been raising red flags of their own for months, calling for a more robust corporate response to the misinformation campaigns their company was facilitating.

That’s a through-line in a trove of corporate reports, memos and chat logs recently made public by whistleblower and former Facebook employee Frances Haugen.

Former Facebook employee Frances Haugen speaks during a hearing of the Senate Commerce, Science, and Transportation Subcommittee on Consumer Protection, Product Safety, and Data Security, on Capitol Hill on Oct. 5 in Washington.
(Alex Brandon / Associated Press)

“We’re not good at detecting misinfo in Spanish or lots of other media types,” reads one such document, a product risk assessment from February 2020, included in disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by Haugen’s legal counsel. A consortium of news organizations, including the Los Angeles Times, obtained the redacted versions received by Congress.

The same document later adds, “We will still have gaps in detection & enforcement, esp. for Spanish.”



The next month, another internal report warned that Facebook had “no policies to protect against targeted suppression (e.g., ICE at polls),” alluding to concerns that Latino voters would be dissuaded from showing up to vote if they were told, falsely, that immigration authorities would be present at polling sites.

The report color-coded that concern bright red: high risk, low readiness.

Later, in an assessment of the company’s ability to handle viral misinformation, the report added: “Gaps in detection still exist (e.g. various media types, Spanish posts, etc.)”

A third internal report pointed to racial groups with low historical voter participation rates as one of the main subsets of Facebook users facing an elevated risk from voter disenfranchisement efforts. Latinos are among those groups.

These concerns would prove prescient as the election drew closer.

“Disinformation targeting Latinos in English and Spanish was happening across the country, especially in places with higher populations of Latinos,” including California, Texas, Florida, New York and Arizona, said Licona, the disinformation researcher. “Facebook was — and still is — a major player.”

Company spokesperson Kevin McAlister told The Times that Facebook took “a number of steps” ahead of the 2020 election to combat Spanish-language misinformation.

“We built a Spanish version of our Voting Information Center where people could find accurate information about the election, expanded our voter interference policies and enforced them in Spanish and added two new U.S. fact-checking partners who review content in Spanish on Facebook and Instagram,” McAlister said. “We invested in internal research to help teams proactively identify where we could improve our products and policies ahead of the U.S. 2020 elections.”


Other, broader measures announced at the time included a halt on new political ads in the week before election day and the removal of misinformation about polling conditions in the final three days before the vote.

By election day, the company reported having removed more than 265,000 Facebook and Instagram posts that violated its voter interference policies and having added warning labels to more than 180 million instances of fact-checked misinformation.

In a June 2020 post on his personal Facebook page, Zuckerberg promised to “ban posts that make false claims saying ICE agents are checking for immigration papers at polling places, which is a tactic used to discourage voting.”

The company also said that four of its 10 fact-checking partners in the U.S. handle Spanish-language content.

Yet the problems facing Latinos on Facebook, WhatsApp and Instagram extend beyond any one election cycle, Haugen’s leaks reveal.

Facebook whistleblower Frances Haugen leaves after giving evidence to the joint committee for the Draft Online Safety Bill, as part of British government plans for social media regulation, at the Houses of Parliament, in London on Oct. 25.
(Matt Dunham / Associated Press)

In 2019, Facebook published an internal study examining efforts to discourage people from participating in the U.S. census, and how users perceived the company’s response to those efforts.

Among the posts that users reported to Facebook were ones “telling Hispanic[s] to not fill out the form”; “telling Hispanics not to participate in answering questions about citizenship”; saying that people “would be in danger of being deported if they participated”; implying the government would “get” immigrants who participated; and “discouraging ethnic groups” from participating.

Facebook’s researchers have also examined the possibility that the abundance of anti-immigrant rhetoric on the site takes an outsized toll on Latino users’ mental well-being.

While discussing one study with colleagues on an internal message board, a researcher commented: “We did want to assess if vulnerable populations were affected differently, so we compared how Latinx [users] felt in comparison with the rest of the participants, given the exposure to anti-immigration hateful rhetoric. We found that they expressed higher levels of disappointment and anger, especially after seeing violating content.”

In other message boards, employees worried that the company’s products might be contributing to broader racial inequities.

“While we presumably don’t have any policies designed to disadvantage minorities, we definitely have policies/practices and emergent behavior that does,” wrote one employee in a forum called Integrity Ideas to Fight Racial Injustice. “We should comprehensively study how our decisions and how the mechanics of social media do or do not support minority communities.”


Another post in the same racial justice group encouraged the company to become more transparent about XCheck, a program designed to give prominent Facebook users higher-quality content moderation which, in practice, exempted many from following the rules. “XCheck is our technical implementation of a double standard,” the employee wrote.

(Aside from a few upper-level managers and executives, individual Facebook employees’ names were redacted from the documents given to The Times.)


As these internal messages suggest, Facebook — a massive company with tens of thousands of employees — is not a monolith. The leaked documents reveal substantial disagreement among staff about all sorts of issues plaguing the firm, with misinformation prominent among them.

The 2020 product risk assessment indicates one such area of dissent. After noting that Spanish-language misinformation detection remains “very low-performance,” the report offers this recommendation: “Just keep trying to improve. Addition of resources will not help.”

Not everyone was satisfied with that answer.

“For misinfo this doesn’t seem right … curious why we’re saying addition of resources will not help?” one employee asked in a comment. “My understanding is we have 1 part time [software engineer] dedicated on [Instagram] detection right now.”

A second comment added that targeted misinformation “is a big gap. … Flagging that we have zero resources available right now to support any work that may be needed here.” (Redactions make it impossible to tell whether the same employee was behind both comments.)


In communications with the outside world, including lawmakers, the company has stressed the strength of its Spanish-language content moderation rather than the concerns raised by its own employees.

“We conduct Spanish-language content review 24 hours per day at multiple global sites,” the company wrote in May in a statement to Congress. “Spanish is one of the most common languages used on our platforms and is also one of the highest-resourced languages when it comes to content review.”

Mark Zuckerberg, chairman and chief executive of Facebook, speaks at the CEO summit during the annual Asia Pacific Economic Cooperation (APEC) forum in Lima, Peru, in 2016.
(Andrew Harnik / Associated Press)

Two months later, nearly 30 senators and House members sent a letter to the company expressing concern that its content moderation protocols were still failing to stanch the flow of Spanish-language misinformation.

“We urge you to release specific and clear data demonstrating the resources you currently devote to protect non-English speakers from misinformation, disinformation, and illegal content on your platforms,” the group told Zuckerberg, as well as his counterparts at YouTube, Twitter and Nextdoor.

Zuckerberg’s response, which again emphasized the resources and manpower the company was pouring into non-English content moderation, left them underwhelmed.


“We received a response from Facebook, and it was really more of the same — no concrete, direct answers to any of our questions,” said a spokesperson for Rep. Tony Cárdenas (D-Pacoima), one of the lead signatories on the letter.

In a subsequent interview with The Times, Cárdenas himself said that he considered his relationship with Facebook “basically valueless.” During congressional hearings, Zuckerberg has “kept trying to give this image that they’re doing everything that they can: they’re making tremendous strides; all that they can do, they are doing; the investments that they’re making are profound and large and appropriate.”

“But when you go through his answers, they were very light on details,” Cárdenas added. “They were more aspirational, and slightly apologetic, but not factual at all.”

It’s a common sentiment on Capitol Hill.

“Online platforms aren’t doing enough to stop” digital misinformation, Sen. Amy Klobuchar (D-Minn.) said in a statement, and “when it comes to non-English misinformation, their track record is even worse. ... You can still find Spanish-language Facebook posts from November 2020 that promote election lies with no warning labels.”

Rep. Alexandria Ocasio-Cortez (D-N.Y.) listens as House Speaker Nancy Pelosi (D-San Francisco) speaks during a news conference on June 16 at the Capitol in Washington.
(J. Scott Applewhite / Associated Press)

“I’ve said it before and I’m saying it again: Spanish-language misinformation campaigns are absolutely exploding on social media platforms like Facebook, WhatsApp, etc.,” Rep. Alexandria Ocasio-Cortez (D-N.Y.) said in a recent tweet. “It’s putting US English misinfo campaigns to shame.”



Latino advocacy groups, too, have been critical. UnidosUS (formerly the National Council of La Raza) recently cut ties with Facebook, returning a grant from the company out of frustration with “the role that the platform has played in intentionally perpetuating products and policies that harm the Latino community.”

Yet for all the concern from within and the criticism from outside, Spanish is a relatively well-supported language, at least by Facebook’s standards.

One leaked memo from 2021 breaks down countries by “coverage,” a metric Facebook uses to track how much of the content users see is in a language supported by the company’s “civic classifier,” an AI tool responsible for flagging political content for human review. Per that report, the only Latin American country with less than 75% coverage is non-Spanish-speaking Haiti. The U.S., for its part, has 99.45% coverage.

And a report on the company’s 2020 expenses indicates that after English, the second-highest number of hours spent on work related to measuring and labeling hate speech went toward Spanish-language content.

Indeed, many of the disclosures that have emerged from Haugen’s leaks have focused on coverage gaps in other, less well-resourced languages, especially in the Middle East and Asia.

But to those seeking to better protect Latinos from targeted disinformation, Facebook’s assertions of sufficient resources — and the concerns voiced by its own employees — raise the question of why it isn’t doing better.


“They always say, ‘We hear you, we’re working on this, we’re trying to get better,’” said González. “And then they just don’t do anything.”
