
Human Rights Groups Ask Zoom to Scrap Emotion AI

An open letter signed by 28 non-profit groups accuses the technology of being manipulative, discriminatory, rooted in pseudoscience, and a data integrity risk. 
By Adrianna Nine
(Photo: Iyus Sugiharto/Unsplash)
A group of human rights organizations is asking Zoom to pump the brakes on its plan to introduce emotion-analyzing artificial intelligence into its video conferencing platform.

Zoom has been flirting with the concept of emotion AI ever since the pandemic gave it a second wind. As we touched on last month, tech giant Intel has been working alongside an e-learning software company to produce an emotion-analyzing program that integrates with Zoom. This program would supposedly benefit teachers by telling them when students appear confused or bored, allowing them to tailor their instruction and increase engagement. Protocol similarly reported in April that companies have begun using emotion AI during sales calls to assess potential customers’ moods and adjust their strategies accordingly. Unbeknownst to the customers, each one is graded on an “emotion scorecard” throughout their call.

Digital rights non-profit Fight for the Future quickly caught wind of Protocol’s report. So did the American Civil Liberties Union (ACLU), Access Now, Jobs With Justice, and 24 other human rights groups—all of whom signed an open letter to Zoom published Wednesday. The letter asks Zoom founder and CEO Eric Yuan to scrap the company’s plans to introduce emotion AI, calling the technology punitive, manipulative, discriminatory, rooted in pseudoscience, and a data integrity risk.

(Photo: Charles Deluvio/Unsplash)

“Zoom claims to care about the happiness and security of its users but this invasive technology says otherwise,” the letter reads. “This move to mine users for emotional data points based on the false idea that AI can track and analyze human emotions is a violation of privacy and human rights. Zoom needs to halt plans to advance this feature.”

The open letter is far from the first to criticize emotion AI.
Many have said the technology constitutes excessive surveillance, especially when the targeted students or customers don’t know their body language, tone, and other alleged emotional markers are being assessed. Others have said emotion AI could end up dishing out negative (or simply incorrect) analyses of people whose cultures express emotions differently.

The advocacy groups’ letter closes by reminding Yuan that his company has previously “made decisions that center users’ rights,” such as backtracking on its decision to implement face-tracking features due to privacy concerns. “This is another opportunity to show you care about your users and your reputation,” the organizations write. “You can make it clear that this technology has no place in video communications.”
