
Congress Advances Bill to Add AI to National Vulnerability Database

The AI Incident Reporting and Security Enhancement Act would allow NIST to create a process for reporting and tracking vulnerabilities found in AI systems.


A House committee advanced a bill that would allow the National Institute of Standards and Technology (NIST) to create a formal process for reporting security vulnerabilities in artificial intelligence (AI) systems. As is the case for many security projects, funding concerns could stymie the initiative.

The AI Incident Reporting and Security Enhancement Act was approved by voice vote in the House Science, Space, and Technology Committee on Wednesday. The bill was introduced by a bipartisan trio of representatives from North Carolina, California, and Virginia. If approved by the full Congress and signed into law, it would give NIST the mandate to incorporate AI systems into the National Vulnerability Database (NVD).

The NVD is the federal government's centralized repository for tracking security vulnerabilities in software and hardware. In its current form, the bill would add to the workload of the already beleaguered NIST teams managing the NVD. Earlier this year, NIST paused updating data on reported vulnerabilities, a move program manager Tanya Brewer said was the result of budget cuts, flat staffing, and an increase in database-related email traffic.

The bill specifies that the increased workload for NIST would be "subject to the availability of funding," but Rep. Deborah Ross (D-NC), a sponsor of the bill, acknowledged the "significant funding and scaling challenges" NIST has already faced in maintaining the database.

"My colleagues and I on this committee are actively exploring solutions to help NIST address this problem and get the money," she said.

Even with committee approval, some members expressed concern about the bill's language, warning that terms such as "substantial artificial intelligence security incident" and "intelligence incident" would need to be clarified to improve the bill's chances of passage. This kind of specificity has become a bigger concern in Congress since the Supreme Court overturned the Chevron doctrine.

The bill would also require NIST to consult with other federal agencies, such as the Cybersecurity and Infrastructure Security Agency, as well as private-sector organizations, standards bodies, and civil society groups, to develop a common lexicon for reporting AI cybersecurity incidents.

About the Author

Jennifer Lawinski, Contributing Writer

Jennifer Lawinski is a writer and editor with more than 20 years of experience in media, covering a wide range of topics including business, news, culture, science, technology and cybersecurity. After earning a master's degree in journalism from Boston University, she started her career as a beat reporter for The Daily News of Newburyport. She has since written for a variety of publications including CNN, Fox News, Tech Target, CRN, CIO Insight, MSN News and Live Science. She lives in Brooklyn with her partner and two cats.

