A National Security Insider Does the Math on the Dangers of AI

Jason Matheny, CEO of the influential think tank Rand Corporation, says advances in AI are making it easier to learn how to build biological weapons and other tools of destruction.
Photograph of bacteria growing in a petri dish
Photograph: Leigh Prather/Shutterstock

Jason Matheny is a delight to speak with, provided you’re up for a lengthy conversation about potential technological and biomedical catastrophe.

Now CEO and president of Rand Corporation, Matheny has built a career out of thinking about such gloomy scenarios. An economist by training with a focus on public health, he dived into the worlds of pharmaceutical development and cultivated meat before turning his attention to national security.

As director of the Intelligence Advanced Research Projects Activity (IARPA), the US intelligence community's research agency, he pushed for more attention to the dangers of biological weapons and badly designed artificial intelligence. In 2021, Matheny was tapped to be President Biden’s senior adviser on technology and national security issues. And then, in July of last year, he became CEO and president of Rand, the oldest nonprofit think tank in the US, which has shaped government policy on nuclear strategy, the Vietnam War, and the development of the internet.

Matheny talks about threats like AI-enabled bioterrorism in convincing but measured tones, Mr. Doomsday in a casual suit. He’s steering Rand to investigate the daunting risks to US democracy, map out new strategies around climate and energy, and explore paths to “competition without catastrophe” with China. But his longtime concerns about biological weapons and AI remain top of mind.

Onstage with WIRED at the recent Verify cybersecurity conference in Sausalito, California, hosted by the Aspen Institute and Hewlett Foundation, he warned that AI is making it easier to learn how to build biological weapons and other potentially devastating tools. (There’s a reason why he joked that he would pick up the tab at the bar afterward.) The conversation has been edited for length and clarity.

Lauren Goode: To start, we should talk about your role at Rand and what you’re envisioning for the future there. Rand has played a critical role in a lot of US history. It has helped inform, you know, the creation of the internet—

Jason Matheny: We’re still working out the bugs.

Right. We’re going to fix it all tonight. Rand has also influenced nuclear strategy, the Vietnam War, the space race. What do you hope that your tenure at Rand will be defined by?

There are three areas that I really want to help grow. First, we need a framework for thinking about what [technological] competition looks like without a race to the bottom on safety and security. For example, how can we ensure competition with China without catastrophe? A second area of focus is thinking about how we can map out a climate and energy strategy for the country in a way that accounts for our technology requirements and the infrastructure we have and are building, and that gets the economics right.

And then a third area is understanding the risks to democracy right now, not just in the United States but globally. We're seeing an erosion of norms in how facts and evidence are treated in policy debates. We have a set of very anxious researchers at Rand who are watching this decay of norms, which is happening alongside a resurgence of variants of autocracy.

One type of risk you’ve been very interested in for a long time is “biorisk.” What’s the worst thing that could possibly happen? Take us through that.

I started out in public health before I worked in national security, working on infectious disease control—malaria and tuberculosis. In 2002, the first virus was synthesized from scratch on a Darpa project, and it was sort of an “oh crap” moment for the biosciences and the public health community, realizing biology is going to become an engineering discipline that could be potentially misused. I was working with veterans of the smallpox eradication campaign, and they thought, “Crap, we just spent decades eradicating a disease that now could be synthesized from scratch.”

I then moved to working in biosecurity, trying to figure out, How could we increase the security around biolabs so that they're less likely to be misused? How can we detect biological weapons programs? Which, unfortunately, still exist in significant numbers in a few places in the world. Also, how can we bake more security into society so that we're more resilient not only to an engineered pandemic but also to natural pandemics?

There's a lot of vulnerability that remains in society. Covid was a demonstration of this. It was a relatively mild virus in historical terms—it had an infection fatality rate of less than 1 percent—whereas some natural viruses have fatality rates well above 50 percent. There are synthetic viruses that have close to 100 percent lethality while still being as transmissible as SARS-CoV-2. Even though we know how to design vaccines and manufacture them very quickly, getting them approved takes about as much time today as it did 20 years ago. So the amount of time you would need to vaccinate a population is about the same today as it was for our parents and even for our grandparents.

When I first started getting interested in biosecurity in 2002, it cost many millions of dollars to construct a poliovirus, a very, very small virus. It would've cost close to $1 billion to synthesize a pox virus, a very large virus. Today, the cost is less than $100,000, so it's a 10,000-fold decrease over that period. Meanwhile, vaccines have actually tripled in cost over that period. The defense-offense asymmetry is moving in the wrong direction.

And what do you see as our greatest adversary in biorisks?

First is nature. The evolution of natural viruses continues. We're going to have future viral pandemics. Some of them are going to be worse than Covid, some of them are going to be not as bad, but we've got to be resilient to both. Covid cost the US economy alone more than $10 trillion, and yet federal investment in preventing the next pandemic is maybe $2 billion to $3 billion.

Another category is intentional biological attacks. Aum Shinrikyo was a doomsday cult in Japan that had a biological weapons program. They believed that they would be fulfilling prophecy by killing everybody on the planet. Fortunately, they were working with 1990s biology, which wasn't that sophisticated. Unfortunately, they then turned to chemical weapons and launched the Tokyo sarin gas attacks.

We have individuals and groups today that have mass-casualty intent and increasingly express interest in biology as a weapon. What's preventing them from using biology effectively isn't controls on the tools or the raw materials, because those are all now available in many laboratories and on eBay—you can buy a DNA synthesizer for much less than $100,000 now. You can get all the materials and consumables that you need from most scientific supply stores.

What an apocalyptic group would lack is the know-how to turn those tools into a biological weapon. There’s a concern that AI makes the know-how more widely available. Some of the research done by [AI safety and research company] Anthropic has looked at risk assessments to see if these tools could be misused by somebody who didn't have a strong bio background. Could they basically get graduate-level training from a digital tutor in the form of a large language model? Right now, probably not. But if you map the progress over the last couple of years, the barrier to entry for somebody who wants to carry out a biological attack is eroding.

So … we should remind everyone there’s an open bar tonight.

Unhappy hour. We’ll pick up the tab.

Right now everyone is talking about AI and the possibility of an artificial superintelligence overtaking the human race.

That's going to take a stiffer drink.

Photograph: Diane Baldwin/RAND

You are an effective altruist, correct?

According to the newspapers, I am.

Is that how you would describe yourself?

I don't think I've ever self-identified as an effective altruist. And my wife, when she read that, she was like, “You are neither effective nor altruistic.” But it is certainly the case that we have effective altruists at Rand who have been very concerned about AI safety. And it is a community of people who have been worried about AI safety longer than many others, in part because a lot of them came from computer science.

So you're not an effective altruist, you're saying, but you are someone who's been very cautious about AI for a long time, like some effective altruists. What was it that made you think years ago that we needed to be cautious about unleashing AI into the world?

I think it was when I realized that so much of what we depend on protecting us from the misuse of biology is knowledge. [AI] that can make highly specialized knowledge easier to acquire without guardrails is not an unequivocal good. Nuclear knowledge will be created. So will biological weapon knowledge. There will be cyber weapon knowledge. So we have to figure out how to balance the risks and benefits of tools that can create highly specialized knowledge, including knowledge about weapons systems.

It was clear even earlier than 2016 that this was going to happen. James Clapper [former US director of national intelligence] was worried about this, and so was President Obama. There was an interview in WIRED in October of 2016. [Obama warned that AI could power new cyberattacks and said that he spent “a lot of time worrying” about pandemics. —Editor] I think he was worried about what happens when you can do software engineering much, much faster and focus it on generating malware at scale. You can basically automate a workforce, and now you've got effectively a million people who are coding novel malware constantly, and they don't sleep.

At the same time, it will improve our cybersecurity, because improvements in security can also be amplified a million-fold. So one of the big questions is whether cyber offense or cyber defense will have a natural advantage as this stuff scales. What does that look like over the long term? I don't know the answer to that question.

Do you think it's at all possible that we will enter any kind of AI winter or a slow-down at any point? Or is this just hockey-stick growth, as the tech people like to say?

It's hard to imagine it really significantly slowing down right now. Instead it seems there's a positive feedback loop where the more investment you put in, the more investment you're able to put in because you've scaled up.

So I don't think we'll see an AI winter, but I don't know. Rand has had some fanciful forecasting experiments in the past. There was a project that we did in the 1950s to forecast what the year 2000 would be like, and there were lots of predictions of flying cars and jet packs, whereas we didn't get the personal computer right at all. So forecasting out too far ends up being probably no better than a coin flip.

How concerned are you about AI being used in military attacks, such as in drones?

There are a lot of reasons why countries are going to want to make autonomous weapons. One of the reasons is what we're seeing in Ukraine, which is this kind of petri dish for autonomous weapons. The radio jamming that's used there makes it very tempting to have autonomous weapons that no longer need to phone home.

But I think cyber [warfare] is the realm where autonomy has the highest benefit-cost ratio, both because of its speed and because of its penetration depth in places that can't communicate.

But how are you thinking about the moral and ethical implications of autonomous drones that have high error rates?

I think the empirical work that's been done on error rates has been mixed. [Some analyses] found that autonomous weapons probably had lower miss rates and probably resulted in fewer civilian casualties, in part because [human] combatants sometimes make bad decisions under stress and under the risk of harm. In some cases, there could be fewer civilian deaths as a result of using autonomous weapons.

But this is an area where it is so hard to know what the future of autonomous weapons is going to look like. Many countries have banned them entirely. Other countries are sort of saying, “Well, let's wait and see what they look like and what their accuracy and precision are before making decisions.”

I think that one of the other questions is whether autonomous weapons are more advantageous to countries that have a strong rule of law than to those that don't. One reason to be very skeptical of autonomous weapons is that they're very cheap. If you have very weak human capital but you have lots of money to burn and a supply chain you can access, that characterizes wealthier autocracies more than it does democracies that have a strong investment in human capital. It's possible that autonomous weapons will be advantageous to autocracies more than to democracies.

You've indicated that Rand is going to increase its investment in analysis on China, particularly in areas where there are gaps in understanding of its economy, industrial policy, and domestic politics. Why this increased investment?

[The US-China relationship] is one of the most important competitions in the world and also an important area of cooperation. We have to get both right in order for this century to go well.

The US hasn't faced a strategic competitor with more than two-thirds of our GDP since the War of 1812. So [we need] an accurate assessment of net strengths and net weaknesses in the various areas of competition, whether in economics, industry, the military, human capital, education, or talent.

And then where are the areas of mutual benefit where the US and China can collaborate? Non-proliferation, climate, certain kinds of investments, and pandemic preparedness. I think getting that right really matters for the two largest economies in the world.

I recently had the opportunity to talk with Jensen Huang, the CEO of Nvidia, and we talked about US export controls. Just as one example, Nvidia is restricted from shipping its most powerful GPUs to China because of the measures put in place in 2022. How effective is that strategy in the long term?

One piece of math that's hard to figure out: Even if the US succeeded in preventing the shipment of advanced chips like [Nvidia] H100s to China, can China get those chips through other means? A second question is, can China produce its own chips that, while not as advanced, might still perform sufficiently for the kinds of capabilities that we might worry about?

If you’re a national security decisionmaker [in China] and you're told, "Hey, we really need this data center to create the arsenal of offensive tools we need. It's not going to be as cost-effective as using H100s; we'll have to pay four times more because of a bigger energy bill, and it'll be slower," you’re probably going to pay the bill. So the question then becomes, at what point is a decisionmaker no longer willing to pay the bill? Is it 10X the cost? Is it 20X? We don't know the answer to that question.

But certain kinds of operations are no longer possible because of those export controls. That gap between what you can get a Huawei chip to do and what you can get an Nvidia chip to do keeps on growing because [the chip technology is] sort of stuck in China, and the rest of the world will keep on getting more advanced. And that does prevent a certain kind of military efficiency in computing that could be useful for a variety of military operations. And I think New York Times reporter Paul Mozur was the first to break the news that Nvidia chips were powering the Xinjiang data center that's being used to monitor the Uighur prison camps in real time.

That raises a really hard question: Should those chips be going into data centers that are being used for human rights abuses? Regardless of one's view of the policy, just doing the math is really important, and that's mostly what we focus on at Rand.