Skip to main content

AI security bill aims to prevent safety breaches of AI models

AI security bill aims to prevent safety breaches of AI models

/

The Secure Artificial Intelligence Act would create a database tracking AI security breaches.

Share this story

Collage of a person in a suit choosing between a devil computer and an angel computer.
Cath Virginia / The Verge | Photos by Getty Images

A new bill seeking to track security issues by mandating the creation of a database recording all breaches of AI systems has been filed in the Senate. 

The Secure Artificial Intelligence Act, introduced by Sens. Mark Warner (D-VA) and Thom Tillis (R-NC), would establish an Artificial Intelligence Security Center at the National Security Agency. This center would lead research into what the bill calls “counter-AI,” or techniques to learn how to manipulate AI systems. This center would also develop guidance for preventing counter-AI measures. 

The bill will also require the National Institute of Standards and Technology (NIST) and the Cybersecurity and Infrastructure Security Agency to create a database of AI breaches, including “near-misses.” 

Warner and Tillis’ proposed bill focuses on techniques to counter AI and classifies them as data poisoning, evasion attacks, privacy-based attacks, and abuse attacks. Data poisoning refers to a method where code is inserted into data scraped by an AI model, corrupting the model’s output. It emerged as a popular method to prevent AI image generators from copying art on the internet. Evasion attacks change data studied by AI models to the point the model gets confused. 

AI safety was one of the key items in the Biden administration’s AI executive order, which directed NIST to establish “red-teaming” guidelines and required AI developers to submit safety reports. Red teaming is when developers intentionally attempt to get AI models to respond to prompts they’re not supposed to. 

Ideally, developers of powerful AI models test the platforms for safety and have them undergo extensive red teaming before being released to the public. Some companies, like Microsoft, have created tools to help make adding safety guardrails to AI projects easier. 

The Secure Artificial Intelligence Act will have to go through a committee before it can be taken up by the larger Senate.