Don't Be Fooled by Covid-19 Carpetbaggers

Coronavirus credentialism is rampant and dangerous. Knowing who's legit and who's an opportunist can save lives.
[Image: a doctor wearing PPE] The carpetbagger growth curve threatens to make the coronavirus growth curve even steeper. Photograph: John Moore/Getty Images

Last week, FiveThirtyEight’s Nate Silver teased his latest project on Covid-19 to his 3.2 million Twitter followers: “Working on something where you can model the number of detected cases of a disease as a function of the number of actual cases and various assumptions about how/how many tests are conducted.”

While his attempt at Twitter epidemiology was criticized mostly by academic scientists, it was hardly offensive enough to warrant anything more than an eye roll. For all of the tweet’s irony—Silver built his reputation by calling out the naivete of bad interpretations of polling data—his attempt was harmless, exploratory, and he didn’t make any claim to being an expert.

That Silver appears to know his place as an outsider on the topic is more than can be said for thousands of people who have rewired their brands, credentials, industries, and research interests to become Covid-19 experts overnight. The growth curve of “experts” mirrors the exponential increase in Covid-19 cases, creating a multiverse of thousands of projections, models, ideas, recommendations, therapies, solutions, and scenarios. Much of it is rife with dangerous misinformation and threatens to worsen the pandemic.

There are many reasons for the big bang of Covid-19 “expertise.” Those wading into the pandemic forum include people who study related topics or have expertise in some scientific domain. Pleuni Pennings, an evolutionary computational biologist and assistant professor at San Francisco State University, says many academics are initially responding to demands from personal and professional circles: “Our students and friends and family members are coming to us for advice. For example, even though I work on HIV, early on, my non-science network came with many practical questions such as, ‘Do you think I can still see my grandchildren?’”

For others, many of whom are not professional scientists, the motivation to participate comes from classical do-gooderism: People with resources, which include both skill sets and time, want to help in some way. And while the road to hell can be paved with good intentions, a world of overnight epidemiologists comprising only highly skilled, magnanimous polymaths would be tolerable (if still exhausting): It would be nice to know that all of these new experts were at least smart and caring.

Unfortunately, the majority of Covid-19 carpetbaggers are at the very least opportunists, and sometimes nefarious propagators of misinformation. They seize on the topic that everyone is talking about to make a name for themselves, a boost in whatever realm they operate.

One story of a suspected Covid-19 opportunist involves Aaron Ginn, a Silicon Valley technologist whose five minutes of fame arrived in March after he wrote a contrarian essay proposing that evidence didn’t support the “hysteria” over the consequences of the pandemic, that the problem might be sorta bad, but not really, really bad.

Ginn flaunted some unusual credentials in support of his authority on the matter: a talent for making products go viral. “I’m quite experienced at understanding virality, how things grow, and data,” he wrote. The logic here would only be amusing if it weren’t potentially harmful.

Ginn’s story became a lightning rod for the expertise debate: After his piece was panned by critics (including one especially damning refutation by Carl Bergstrom, coauthor of the upcoming Calling Bullshit), it was removed by Medium, a decision that was criticized by The Wall Street Journal as an act of censorship. The editorial is off base, of course, as Ginn’s missteps were not simply a matter of preference; poorly vetted ideas and misinformation are often propagated and promoted in digital spaces, which can influence behavior.


While Silicon Valley has been roundly criticized by the scientific community over this style of aggressive parachuting into Covid-19, tech bros aren’t the only ones guilty of opportunism. In fact, some of the worst offenders are academic scientists with strong (even stellar) reputations in their own fields who suffer from a serious case of Covid-19 FOMO.

One of the most high-profile examples of a well-regarded academic jumping the Covid-19 shark is the rise and fall of Stephen Quake, armchair epidemiologist. Notably, Quake is a professor at Stanford and a superstar biophysicist by every professional metric. He doubles as copresident of the Chan Zuckerberg Biohub, a $600 million collaborative research initiative, a role that amplified the influence of, and backlash to, his March 22 Medium essay, “How Bad Is the Worst-Case Coronavirus Scenario?”

Based on the popular model developed by Neil Ferguson and colleagues, Quake compared the 500,000 possible Covid-19 deaths to other major causes of death and seemed to suggest that, because a comparable number of Americans die of cancer, the fuss around the number of potential Covid-19 deaths is unwarranted. Quake’s argument reads like a Thanos-inspired “All Lives Matter” manifesto: People die a lot anyway, and this unusual way of dying will be solved in a short while, so what's the big deal? Quake’s attempt at an “I bet they’ve never heard this” provocation succeeded only in telling us that he is either a bad person or didn’t think very clearly about the problem (maybe both).

Most charitably, we might attribute misfirings like Ginn’s and Quake’s to outsized egos, which compel them to question whether studying Covid-19 is really more challenging than studying the market or polymers (or whatever complicated idea they’ve built a reputation on). Their egos may conclude that people in the field of epidemiology can’t possibly be any smarter than they are, and another flawed Medium article is born.

Elaine Nsoesie, a computational epidemiologist and assistant professor at the Boston University School of Public Health, says that people who “have not studied infectious diseases will make assumptions and inferences that are incorrect. People who already have a large following on Twitter, for example, can spread misinformation that could impact the control of the Covid-19 pandemic.”

Naive assumptions can create misinformation. This is where the ego-FOMO opportunism becomes unethical: Not only are the naive ideas wrong, they are especially bad because they may affect the behavior and well-being of others.

The problems with Covid-19 profiteers—whether they are scientists or not—are many. And in a Covid-19 world already saturated with ideas, it can be difficult for anyone to tell real from fake. Who should we trust? And who, exactly, is an expert?

Nsoesie says she’s “part of several infectious disease modeling communities, so I know people who have been working in this space for a while. Those are the people I tend to pay attention to. If I see someone I don’t know, I look at the person’s previous research if they are academics. If they are medical professionals, then I look at their area of expertise.”

Pennings adds, “We all just need to be careful to indicate how certain we are and not pretend like we know everything. And if your opinion goes against guidelines from agencies like CDC, I think you need to be extra careful with sharing that opinion and using your ‘credentials’ to be critical.”

That neither Nsoesie nor Pennings, both well-respected academic scientists, requires specific credentials of the people they listen to distinguishes them from many of their academic colleagues. Reflexive criticism of the Covid-19 outsider-expert from professional scientists too often sounds like classical gatekeeping.

Samuel Scarpino, a mathematical biologist and assistant professor at Northeastern University’s Network Science Institute, is strongly critical of opinions that are too steeped in credentialism. “A lot of the frustration from academic scientists is rooted in the problematic notion that just because someone has worked on a topic for a long time that they should determine who should have an opinion on it.”

Though Scarpino has contributed to several visible and influential studies on Covid-19 epidemiology, he suggests that “there’s not a single card-carrying epidemiologist who would call me an epidemiologist.”

“Being an authority on Covid-19,” he adds, “should not be about whether someone is an epidemiologist or not. It should be about whether you’re trying to be thoughtful and are communicating effectively.”

Gatekeeping is antithetical to the growth of science. Many tools of modern biology, for example, came from insights developed by computer scientists, engineers, and mathematicians. Science works best as a creative, collaborative enterprise.

Nonetheless, lines between the hacks and the wonks can appear thin. But there is a soft algorithm we can use to help us think about who to take seriously and who to ignore.

- Transparency about motivations and methods. A true Covid-19 expert provides open qualifiers for their interpretations, makes their assumptions very clear (sometimes before they’ve told you what they actually think), provides proper disclaimers and almost never makes rigid predictions. If they’ve never worked on a disease before, they should share this fact, and explain their motivations. (They do not, however, need to apologize for being interested and wanting to help.) Relatedly, a true Covid-19 expert who analyzes data and creates an algorithm or model of any kind will make their data openly and freely available, so that it can be verified and reproduced. If it is challenging to find the data used by an expert, or hard to run a model that they’ve produced, then the work (and its author) should be ignored. For a positive example, Harvard mathematical biologist Alison Hill created a tool for Covid-19 modeling that included full access to the available code, notably under a Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) License, which permits the sharing, copying, editing, and remixing of all material (even for commercial purposes).

- Transparency in contributions. A true Covid-19 expert will give proper credit to everyone who contributed to whatever model or set of findings they’ve come up with. Covid-19 expertise is not Gregor Mendel, alone in a garden, counting plants en route to discovering the fundamentals of genetics. Understanding an epidemic requires the participation of talented people, often from across the globe. Anyone who proposes an idea that is supposed to be useful or original but doesn’t openly and clearly acknowledge the contributions of others should be automatically ignored. This includes properly citing relevant work and data sources.

- Participation in the scientific ecosystem. A Covid-19 expert should make an attempt to participate in the existing ecosystem of science, where findings are produced and shared in the form of a scientific “preprint” or manuscript. That is, findings and results should not live only on a company website or personal blog. They should be cultivated into a scientific format. While the formalism of scientific publishing is far from perfect, it contains features that are essential for honest discourse: It allows us to observe whether an author has read and is citing the proper literature; it allows the author to communicate their methods and interpretations with rigor, in detail; and it allows the community a formal means of criticizing, referencing, and improving upon the work.

I label the three rules above a “soft” algorithm because there is no strict method for ensuring that the good ideas filter through. But it provides a start and is effective at identifying some of the red flags of hacks and trolls.

In some ways Covid-19 is not unlike any other paradigm where people feel compelled to become experts overnight. Many fields—ranging from basketball to criminal justice—are mired in culture wars where people who study algorithms and analytics clash with an old guard that relies on domain expertise. And the proliferation of opinions—good and bad—is, in some ways, the price of a democracy of ideas. The alternatives resemble religious authoritarianism, where knowledge is based on the incorrigible opinion of individuals, a system guaranteed to be bad at solving epidemics.

In the end, the challenge of identifying an expert is much like the science of understanding epidemics: tough to achieve with absolute certainty. But in a hyperconnected world, where words and ideas have weight, being able to identify the imposters can save lives.


WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints. Read more opinions here. Submit an op-ed at opinion@wired.com.
