There’s No Such Thing As a Tech Expert Anymore

Members of Congress clearly don’t understand the tech companies they’re supposed to regulate. But neither does anyone else.
Photo-Illustration: Sam Whitney; Getty Images

Every time Congress holds a hearing about Silicon Valley companies, people mock the legislators for being out of their depth.

Last week’s effort by the antitrust subcommittee of the House Judiciary Committee was no exception. "The technological ignorance demonstrated by our elected officials ... was truly stunning," Shelly Palmer, CEO at the Palmer Group, a tech strategy advisory group, told USA Today. "People who are this clueless about the economic forces shaping our world should not be tasked with leading us into the age of AI," he said. "The data elite are playing a different game with a different set of rules. Apparently, Congress can't even find the ballpark."

You can find similarly snide comments after every such hearing over the past four years. All these complaints are unfair and unfounded.

In 1787, we decided that we would be ruled by citizens, not by priests, professors, or professionals. We don’t insist that everyone in Congress understand how the B-2 Spirit “stealth” bomber works, how serotonin reuptake inhibitors help manage depression, or even how the internal combustion engine works. Yet we justifiably expect our government to regulate them. People who complain about the ignorance of congressional representatives betray their own ignorance of how democracy works.

Still, we do demand expertise and depend on expertise to make policy decisions. Such expertise comes from training, experience, or both. As companies like Amazon, Google, and Facebook have grown in importance to our lives, politics, and economics, technology expertise has become a growth industry.

I should know. I play a tech expert on TV every once in a while. But am I worthy of being called an expert? Why should anyone listen to me?

The last time I wrote a line of code was in 1984. I was a senior in high school, struggling to make my way through an early version of AP Computer Science, wondering how long the world would use Pascal and BASIC to make rudimentary accounting software or simple, text-based games. I gave it up, despite having an early, nerdy infatuation with computers and code, because I saw I lacked the spark, the passion, the curiosity, and the creativity of two of my smartest friends. They both disappointed their immigrant parents by giving up the opportunity to be pre-medicine at Harvard and instead rebelled to become computer science students at Stanford.

Neither of those choices was mine to make. And that’s OK. I immersed myself in writing, reading history, playing in politics, and ultimately practicing journalism at a big public university. I became more of an Austin slacker (at the exact moment Richard Linklater’s film Slacker was being shot in Austin), leaving behind the nerdy trappings of my first youth. I preferred a lack of routine to the construction of subroutines.

Maybe I left a few million dollars in Microsoft or Dell stock (Michael Dell was a contemporary of mine at the University of Texas) on the table in some room behind a door I did not open. But I wanted to understand things more than I wanted to make things.

Much later, while in graduate school in American Studies, I realized that computer science was a method of understanding much like many others. I hadn’t only lacked the passion for making elegant code. I’d also misunderstood coding as a craft instead of a field of knowledge and inquiry. By 1995, just as the internet became a thing in our lives, I was back to trying to understand how these machines work—and what would be the consequences of stringing them all together and unleashing their collective computational power.

Somehow, I made a career out of leveraging my fascination with computing into an expertise on its effects on politics, culture, and society—but one that never plumbs the depths of the machinery or its code.

You might say I faked it. You might say I paid attention to the macro rather than the micro. I prefer to be generous to myself.

I managed to convince many scholars, deans, reporters, and even the editors of this esteemed publication (to which I have subscribed since about 1996) that I am, in fact, an expert. I have written or edited six books related to the effects of technology on democracy and culture, including one devoted to the consequences of our collective dependence on Google and another to the uses and dangers of Facebook.

Am I really an expert on Google and Facebook? Or, more appropriately, who is an expert on these companies? Is anyone?

I have some nominees. There are journalists like Steven Levy or Kara Swisher, who have been covering the personalities and policies of these companies for decades. But do they understand the code, the server farms, the global networks of undersea cables? Can they discuss the fragile treaties and legal settlements that have let these companies transfer sensitive user data from Europe to North America and back?

There are former friends of Mark Zuckerberg, like the investor and writer Roger McNamee or the investor and writer Chris Hughes. But do they know how to code? Do they grasp the ways in which societies and cultures reshape themselves around mobile devices and flows of data?

The best candidates are scholars like danah boyd of Data & Society, Zeynep Tufekci of the University of North Carolina, and Ian Bogost of Georgia Tech. They all have deep backgrounds in coding and working for technology companies, and have deployed academic expertise and writing skills to influence public understanding of these industries.

There are former employees of these companies like Antonio Garcia Martinez, who helped build Facebook’s advertising systems after building a couple of previous Silicon Valley startups. Tristan Harris used to work on Google’s email services before quitting to criticize the company for building all its systems to maximize user engagement and leverage attention for revenue. They both understand the mechanisms of their portions of the companies for which they worked. But did they ever get to see how the whole system works? And what qualifies them to comment on the big picture?

Does anyone, even Mark Zuckerberg and Sundar Pichai, really understand these massive, complex, global information systems with their acres of infrastructure, billions in revenue, and billions of users almost as diverse as humanity itself?

I think not. That’s the thing about complex systems: almost no one understands any of them. As technology writer Samuel Arbesman argues in his important book, Overcomplicated: Technology at the Limits of Comprehension, the messiness of complex systems, in which teams of people each understand one aspect yet no one grasps the whole, has invited such calamities as the May 2010 “flash crash” of US financial markets. A complex system like a computer-driven securities market has multiple points of failure: a tangle of computer code, human actions, laws and regulations, and massive amounts of financial data that no one person comprehends. Ultimately, many people have theories of what went wrong that day. No one knows for sure—or how to avoid another such collapse.

Consider Google. It’s a 22-year-old company that started out complex. It was a collection of servers and some brilliant code that scraped the growing web, making copies of every new page and indexing the terms (and later images) to rank them based on a dynamic assessment of “relevance” to users typing terms into a box. Only later did the company add advertisement auctions, productivity applications, maps, self-driving cars, books, mobile operating systems, videos, Wi-Fi routers, home surveillance devices, thermostats, and who-knows-what-next to its collection of services that somehow promise to work in concert. I would love to meet the person at Google who understands Google, or—even better—a person at Alphabet who truly understands Alphabet. That would be a busy, and brilliant, person.
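That original crawl-index-rank pipeline can be sketched in miniature. The toy below is only an illustration of the general idea, not Google’s actual method: a handful of hardcoded “pages” stands in for the crawled web, and a simple TF-IDF weight stands in for the company’s far richer, dynamic notion of “relevance.” All names here (`build_index`, `search`, the example URLs) are invented for the sketch.

```python
from collections import defaultdict
import math

def build_index(pages):
    """Build an inverted index: each term maps to the pages
    containing it, with a count of how often it appears there."""
    index = defaultdict(dict)
    for url, text in pages.items():
        for term in text.lower().split():
            index[term][url] = index[term].get(url, 0) + 1
    return index

def search(index, query, total_pages):
    """Rank pages for a query by summed TF-IDF: frequent terms on a
    page score higher, but terms found everywhere count for less."""
    scores = defaultdict(float)
    for term in query.lower().split():
        postings = index.get(term, {})
        if not postings:
            continue  # term appears nowhere in the index
        idf = math.log(total_pages / len(postings))
        for url, term_count in postings.items():
            scores[url] += term_count * idf
    return sorted(scores, key=scores.get, reverse=True)

# Three stand-in "crawled pages."
pages = {
    "a.example": "cats and dogs",
    "b.example": "cats cats cats",
    "c.example": "dogs only here",
}
index = build_index(pages)
print(search(index, "cats", len(pages)))  # b.example ranks first: more mentions
```

Even this toy shows why relevance ranking is harder than it looks: the scoring function is a judgment call, and every product bolted on afterward interacts with it in ways no single diagram captures.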

So as we look at the myriad ways Google and Facebook have let us down and led us astray, let’s remember that no one has the manual. No one fully understands these systems, not even the people who designed them at their birth. The once impressive, now basic, algorithms that made Google and Facebook distinct and useful have long been eclipsed by far more sophisticated and opaque machine-learning systems trained on vast data sets. They are not just black boxes to regulators, journalists, and scholars. They are black boxes to the very engineers who work there.

As Arbesman writes of other complex systems, “While many of us continue to convince ourselves that experts can save us from this massive complexity—that they have the understanding that we lack—that moment has passed.”

So the next time Congress calls technology company leaders up to testify, we should remember that no one really understands these behemoths. They sure do understand us.