What Using Security to Regulate AI Chips Could Look Like
An exploratory research proposal recommends regulating AI chips and strengthening governance measures to keep pace with rapid technical innovation in artificial intelligence.
February 16, 2024
Researchers from OpenAI, Cambridge University, Harvard University, and the University of Toronto have offered "exploratory" ideas on how to regulate AI chips and hardware and how security policies could prevent the abuse of advanced AI.
The recommendations provide ways to measure and audit the development and use of advanced AI systems and the chips that power them. Policy enforcement recommendations include limiting the performance of systems and implementing security features that can remotely disable rogue chips.
"Training highly capable AI systems currently requires accumulating and orchestrating thousands of AI chips," the researchers wrote. "[I]f these systems are potentially dangerous, then limiting this accumulated computing power could serve to limit the production of potentially dangerous AI systems."
Governments have largely focused on software in their AI policy; the paper is a companion piece covering the hardware side of the debate, says Nathan Brookwood, principal analyst at Insight 64.
However, the industry will not welcome any security features that affect the performance of AI, he warns. Making AI safe through hardware "is a noble aspiration, but I can't see any one of those making it. The genie is out of the lamp — and good luck getting it back in," he says.
Throttling Connections Between Clusters
One proposal is a cap on the compute capacity available to AI models. The idea is to put security measures in place that can detect abuse of AI systems and then cut off or limit the use of chips.
Specifically, the researchers suggest a targeted approach: limiting the bandwidth between memory and chip clusters. The easier alternative, cutting off access to chips entirely, was not ideal, they wrote, because it would affect overall AI performance.
The paper did not suggest ways to implement such security guardrails or how abuse of AI systems could be detected.
"Determining the optimal bandwidth limit for external communication is an area that merits further research," the researchers wrote.
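The paper leaves the mechanism open, but the general shape of a bandwidth cap is well understood in networking. As a hedged illustration only (the names, rates, and burst sizes below are hypothetical, not from the paper), a token-bucket limiter caps sustained throughput on a link while tolerating short bursts:

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter: tokens accrue at `rate` bytes/sec
    up to `capacity`; a transfer proceeds only if enough tokens remain."""

    def __init__(self, rate_bytes_per_s, capacity_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = capacity_bytes
        self.tokens = capacity_bytes
        self.last = time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        # Refill tokens for the elapsed interval, clamped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

# Hypothetical policy: cap an inter-cluster link at 1 GB/s sustained
# with a 64 MB burst allowance.
link_cap = TokenBucket(rate_bytes_per_s=1_000_000_000,
                       capacity_bytes=64_000_000)
print(link_cap.allow(32_000_000))  # within the burst budget -> True
print(link_cap.allow(64_000_000))  # exceeds remaining tokens -> False
```

A real enforcement point would sit in the interconnect fabric or switch hardware rather than in software, but the policy question the researchers raise, where to set the rate, is the same.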
Large-scale AI systems demand tremendous network bandwidth; Microsoft's Eagle and Nvidia's Eos, for example, rank among the 10 fastest supercomputers in the world. Ways to limit network performance do exist for devices supporting the P4 programming language, which can analyze network traffic and reconfigure routers and switches.
But good luck asking chip makers to implement AI security mechanisms that could slow down chips and networks, Brookwood says.
"Arm, Intel, and AMD are all busy building the fastest, meanest chips they can build to be competitive," he says. "I don't know how you can slow down."
Remote Possibilities Carry Some Risk
The researchers also suggested disabling chips remotely, a capability Intel has built into its newest server chips. The On Demand feature is a subscription service that lets Intel customers turn on-chip features, such as AI extensions, on and off, much like heated seats in a Tesla.
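In spirit, such a remote toggle amounts to the device refusing to enable a feature unless it receives an authorization token it can verify. A minimal sketch, assuming a symmetric key provisioned at manufacture (a simplified stand-in for the vendor infrastructure a real scheme would use; all names here are hypothetical):

```python
import hmac
import hashlib

# Hypothetical: a secret burned into the device at the factory.
DEVICE_KEY = b"provisioned-at-fab"

def issue_token(feature: str, key: bytes = DEVICE_KEY) -> str:
    """Vendor side: MAC the feature name so the device can verify it."""
    return hmac.new(key, feature.encode(), hashlib.sha256).hexdigest()

def enable_feature(feature: str, token: str, key: bytes = DEVICE_KEY) -> bool:
    """Device side: enable the feature only if the token checks out."""
    expected = hmac.new(key, feature.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)

token = issue_token("ai-extensions")
print(enable_feature("ai-extensions", token))     # True
print(enable_feature("ai-extensions", "forged"))  # False
```

The same check run in reverse, withholding or revoking the token, is what would let a vendor remotely disable a feature.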
The researchers also suggested an attestation scheme where chips allow only authorized parties to access AI systems via cryptographically signed digital certificates. Firmware could provide guidelines on authorized users and applications, which could be changed with updates.
While the researchers did not provide technical recommendations on how this would be done, the idea is similar to how confidential computing secures applications on chips by attesting to the identity of authorized users. Intel and AMD ship confidential computing features on their chips, but the technology is still in its early days.
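The core of any such attestation scheme is an allowlist check: the hardware only serves callers whose credentials it recognizes. As a rough sketch of that gatekeeping step (real schemes would verify a full cryptographic signature chain, which is elided here; the certificate names are invented for illustration), firmware could hold fingerprints of authorized certificates:

```python
import hashlib

# Hypothetical firmware allowlist of authorized certificate fingerprints.
AUTHORIZED_FINGERPRINTS = {
    hashlib.sha256(b"cert:lab-cluster-01").hexdigest(),
}

def attest(cert_bytes: bytes) -> bool:
    """Return True only for certificates on the firmware allowlist."""
    return hashlib.sha256(cert_bytes).hexdigest() in AUTHORIZED_FINGERPRINTS

def run_workload(cert_bytes: bytes) -> str:
    """Gate accelerator access behind the attestation check."""
    if not attest(cert_bytes):
        return "denied"
    return "running"

print(run_workload(b"cert:lab-cluster-01"))  # running
print(run_workload(b"cert:unknown-party"))   # denied
```

Because the allowlist lives in firmware, updates could add or revoke authorized users and applications, as the researchers describe.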
There are also risks to remotely enforcing policies.
"Remote enforcement mechanisms come with significant downsides, and may only be warranted if the expected harm from AI is extremely high," the researchers wrote.
Brookwood agrees.
"Even if you could, there are going to be bad guys who are going to pursue it," he says. "Putting artificial constraints on good guys is going to be ineffective."