‘When a platform’s business model is to maximize engagement time in order to sell ad revenue, divisive, shocking and conspiratorial content gets pushed to the top of the feed.’ Photograph: Saul Loeb/AFP/Getty Images

Deepfakes aren't a tech problem. They're a power problem


By framing deepfakes as a tech problem, we allow Silicon Valley to evade responsibility for its symbiotic relationship with fake news

In the lead-up to the 2016 election, very few predicted the degree to which online misinformation would disrupt the democratic process. Now, as we edge closer to 2020, there is a heightened sense of vigilance around new threats to truth in our already fragile information ecosystem.

At the top of the list of concerns are no longer Russian bots but deepfakes: artificial intelligence-manipulated media that can make people appear to do or say things that they never did or said.

The threat is being taken so seriously that last Thursday, the House intelligence committee held Congress’s first hearing on the subject. In his opening remarks, Representative Adam Schiff, the committee chairman, talked of society being “on the cusp of a technological revolution” that will qualitatively transform how fake news is made. He spoke of “advances in AI” that will make it possible to compromise election campaigns. He made repeated mention of how better algorithms and data will make it extremely difficult to verify the veracity of images, videos, audio or text.

In essence, he framed the problem of doctored media as a new threat caused by sophisticated emerging technologies.

Schiff is not alone. The broader discourse around fake content has become increasingly focused on AI-generated content, where cutting-edge machine learning techniques are used to create uncanny copies of people’s faces, voices and writing styles. But as technologically impressive as these new techniques are, I worry that focusing on the “state of the art” is a distraction from a deeper problem.

To understand why, consider the most high-profile example of manipulated media to spread online to date: the doctored video of the House speaker, Nancy Pelosi, made to seem as if she was drunkenly slurring her speech. Far from being created by a technically savvy operator misappropriating the fruits of a “technological revolution”, it was made using rudimentary editing techniques by a sports blogger and avid pro-Trumper from the Bronx named Shawn Brooks.

This fake video spread so far and wide not because it was technologically advanced, or even particularly visually compelling, but because of the cynical nature of social media. When a platform’s business model is to maximize engagement time in order to sell ad revenue, divisive, shocking and conspiratorial content gets pushed to the top of the feed. Brooks’s most salient skill, like that of other trolls, was understanding and exploiting these dynamics.

Indeed, the Pelosi video demonstrated just how symbiotic and mutually beneficial the fake news-platform relationship has become. Facebook refused to take down the altered video, noting that its content policy does not require a post to be true. (Facebook did “reduce” the video’s distribution in the news feed, in an attempt at harm minimization.) The reality is that divisive content is, from a financial perspective, a win for social media platforms. As long as that logic underpins our online lives, cynical media manipulators will continue to exploit it to spread social discord, with or without machine learning.

And herein lies the problem: by framing deepfakes as a technological problem, we allow social media platforms to promote technological solutions to it – cleverly distracting the public from the possibility that there are more fundamental problems with powerful Silicon Valley tech platforms.

We have seen this before. When Congress interrogated Mark Zuckerberg last year about Facebook’s privacy problems and involvement in spreading fake news, instead of reflecting on structural issues at the company, Zuckerberg repeatedly assured Congress that technological solutions that would fix everything were just over the horizon. Zuckerberg mentioned AI more than 30 times.

Underpinning all of this is what Evgeny Morozov has called “technological solutionism”: an ideology endemic to Silicon Valley that reframes complex social issues as “neatly defined problems with definite, computable solutions … if only the right algorithms are in place!” This highly formal, systematic, yet socially myopic mindset is so pervasive within the tech industry that it has become a sort of meme. How do we solve wealth inequality? Blockchain. How do we solve political polarization? AI. How do we solve climate change? A blockchain powered by AI.

This constant appeal to a near-future of perfectly streamlined technological solutions distracts and deflects from the grim realities we presently face. While Zuckerberg promised better AI for content moderation in front of Congress last year, reports have since emerged that much content moderation still relies on humans, who are subjected to highly traumatic content and terrible working conditions. By talking incessantly about AI-powered content moderation, the company diverts attention away from this real human suffering.

The “solutionist” ideology has also influenced the discourse around how to deal with doctored media. The solutions being proposed are often technological in nature, from “digital watermarks” to new machine learning forensic techniques. To be sure, many experts are doing valuable security research to make the detection of fake media easier, and that work is worthwhile. But on its own, it is unclear that it would fix the deep-seated social problem of truth decay and polarization that social media platforms have played a major role in fostering.

The biggest problem with technological solutionism is that it can be used as a smokescreen for deep structural problems in the technology industry, as well as a strategy for stymieing precisely the type of political interventions that need to happen to curtail the singular power that these companies have in controlling how we access information.

If we continue to frame deepfakes as a technological problem instead of something rotten at the core of the attention economy, we leave ourselves just as vulnerable to misinformation in 2020 as we were in 2016.

  • Oscar Schwartz is a freelance writer and researcher based in New York
