How Sidechat Fanned the Flames of University Campus Protests

Amid tensions about free speech on university campuses and the ongoing Israel-Hamas war, anonymous social media app Sidechat has become a hotbed of vile rhetoric.

In the months following Hamas’ October 7 attack on Israel, conversation on college campuses has been defined by a palpable tension. A rise in antisemitic and anti-Muslim rhetoric has embroiled numerous universities in free-speech debates. In mid-April, as the Israel-Hamas war moved into its seventh month, students at Columbia University and other institutions across the US began protesting, calling for a ceasefire. Amid all of this, one platform has served as a locus: Sidechat, a social media app that’s become both a place for dialogue about the protests and a breeding ground for hate speech.

Over the past few weeks, as demonstrations erupted at Columbia, NYU, Yale, Princeton, the University of Texas, and elsewhere, students took to the app to share memes and express dismay at their administrators’ responses.

On April 22, following a weekend of arrests at Columbia, Colin Roedl, editorial page editor at the student-run Columbia Daily Spectator, told Slate that students were seeing “calls for solidarity” on the app. The following day, some 3,000 Columbia staff, students, and community members signed a letter to university president Minouche Shafik, the board of trustees, and the school’s deans supporting “campus safety and academic freedom.” It included a link to a folder of Sidechat screenshots showing people asking how to join the encampments on campus and discussing Zionism.

On Tuesday, the New York Police Department arrested hundreds of protesters at Columbia and City College of New York.

Prior to the protests, administrators at other colleges, like Harvard and Brown, had sought to increase moderation on Sidechat, citing increased reports of harassment and hate speech from students using the platform. Rhetoric on the app had become “dehumanizing, racist, homogenizing, (and) hateful,” says Aboud Ashhab, a Palestinian student at Brown. Andrew Rovinsky, a Jewish student at the university, calls it “a cesspool.”

Because the app’s defining feature is anonymous student discourse (users don’t post under their real names), toxic messages and demeaning language flow freely. “What you see on Sidechat is a bunch of people actually engaging in the most vile rhetoric you’ve seen, because it’s anonymous,” Rovinsky says.

Launched in 2022 as a way for college students to whisper about campus happenings, Sidechat quickly spread across US universities. Like early Facebook, the app requires a university email address to log in. It initially served as a hub for gossip and collective complaining, but in recent months university administrators have taken notice of more heated discussion on the platform and implored Sidechat to strengthen its content moderation.

While the app’s user guidelines state that the platform does not allow content that “perpetuates the oppression of marginalized communities by promoting discrimination against (or hatred toward) certain groups of people,” both Sidechat and its predecessor Yik Yak have come under fire for fostering an online environment hospitable to hate speech.

In fact, before Sidechat acquired Yik Yak in 2023, Yik Yak had taken a four-year hiatus after a barrage of complaints about racism, discrimination, and threats of violence circulating on the app. Hateful comments in the months following the October 7 attack suggest Sidechat is not so different from its forerunner.

At the start of 2024, Harvard became the first university to ask Sidechat to increase its moderation. In response, Sidechat cofounder Sebastian Gil told Inside Higher Ed in an email that the app does “more than most (if not all) social media apps in moderation,” citing a team of 30 employees and the company’s use of machine learning models to “detect bigotry.” (Gil said the same in response to a request for comment from WIRED.)

In March, the Brown administration sent out a campus-wide announcement describing discussions with Sidechat leadership “to determine what steps are being taken by Sidechat” to moderate content on the platform, along with instructions for how students could report hate speech on the app.

At a March 20 Brown University Community Council meeting open to the public, university president Christina Paxson addressed concerns raised by community members. “To answer the question of, could a university basically get rid of Sidechat?” she said. “Putting aside concerns about censorship, which we take very seriously, the answer is: not really.” Other schools, like the University of North Carolina, have tried to block the app, but those efforts can only do so much when students are on their personal devices and data networks.

On April 24, students at Brown University set up a pro-Palestinian encampment on the Rhode Island campus to protest Israel’s actions in Gaza. On Tuesday, they reached an agreement requiring administrators to meet and vote on divesting from companies with ties to Israeli interests. On Sidechat, users posted memes with Paxson’s face edited onto Donald Trump’s body on the cover of his book Trump: The Art of the Deal. Students also posted screenshots of comments from the Instagram page of the pro-Palestinian Brown Divest Coalition that called student protesters terrorists and the school “jihadist Brown University.”

Rovinsky says that within the past six months, as Brown’s campus “grew more tense,” he has seen racist, anti-Palestinian, and antisemitic content surge on the app. He identifies two types of rhetoric repeatedly wielded on Sidechat: direct hate speech, like posts saying “all Palestinians are terrorists,” and subtler, coded pejorative comments, “like employing classic antisemitic tropes and dog whistles.”

“These are the things that are slipping past moderators,” Rovinsky says. “Like, if you’re not just saying a slur for Jewish people or a slur for Palestinian people. People are using a lot of coded language.”

Ashhab has also encountered nuanced anti-Arab rhetoric on the platform, but he notes that the majority of racist content he has scrolled past has been cut-and-dried: “They have said things like ‘all people in Gaza are backward, they all kill women … they’re all rapists …’ saying ‘all Palestinians and Arabs blow up buses and stab people.’”

Although the app’s guidelines declare that it does not “allow discrimination against marginalized identities, includ[ing] but not limited to racism … [and] religious discrimination,” Ashhab wonders whether the rhetoric he has encountered is on the platform’s radar at all.

Elizabeth Lokoyi, a Brown junior who worked part-time as a Sidechat moderator during the spring of her freshman year, describes the company’s guidelines for enforcing its hate speech prohibition as “vague.” The app directs much of that decision-making back onto the students who use it.

“What is or isn’t constituted as hate speech is very much at the discretion of the moderator,” she says. “It was kind of hard to pinpoint what distinguished hate speech from questionable discourse. But anything with violence, like threats or expressing negativity about a specific identity of people, would be categorized as hate speech, or flagged to be taken down, rather.”

Because Sidechat is an external platform, there are considerable limits to what colleges and universities can do to curb antisemitic and anti-Palestinian rhetoric on the app. While some students believe universities have no business policing Sidechat, most agree that the company could bolster its moderation by relying entirely on human moderators and scaling back the AI tools it uses to review posted content.

“I don’t think the university should be regulating Sidechat. That’s really on the company to work on,” Rovinsky says. “They shouldn’t have students working as moderators, because obviously we each have our own biases on what is and isn’t hate speech and are also direct stakeholders in that online ecosystem.”

Ashhab similarly believes that AI cannot be trained to reliably filter out nuanced hate speech or racist rhetoric, because these tools are not human. “AI can’t just detect all dehumanizing rhetoric and language,” he says. “That’s something that can’t be decided by filtering through keywords or just taking the message at face value.” And AI has never set foot on a college campus.