
What Using Security to Regulate AI Chips Could Look Like

An exploratory research proposal recommends regulating AI chips and strengthening governance measures to keep pace with rapid technical innovation in artificial intelligence.


Researchers from OpenAI, Cambridge University, Harvard University, and the University of Toronto have offered "exploratory" ideas on how to regulate AI chips and hardware and how security policies could prevent the abuse of advanced AI.

The recommendations provide ways to measure and audit the development and use of advanced AI systems and the chips that power them. Policy enforcement recommendations include limiting the performance of systems and implementing security features that can remotely disable rogue chips.

"Training highly capable AI systems currently requires accumulating and orchestrating thousands of AI chips," the researchers wrote. "[I]f these systems are potentially dangerous, then limiting this accumulated computing power could serve to limit the production of potentially dangerous AI systems."

Governments have largely focused their AI policy efforts on software; the paper is a companion piece covering the hardware side of the debate, says Nathan Brookwood, principal analyst at Insight 64.

However, the industry will not welcome any security features that affect the performance of AI, he warns. Making AI safe through hardware "is a noble aspiration, but I can't see any one of those making it. The genie is out of the lamp — and good luck getting it back in," he says.

Throttling Connections Between Clusters

One of the researchers' proposals is a cap on the compute capacity available to AI models. The idea is to put security measures in place that can identify abuse of AI systems and then cut off or limit the use of the chips involved.

Specifically, they suggest a targeted approach of limiting the bandwidth between memory and chip clusters. The easier alternative — to cut off access to chips — wasn't ideal because it would affect overall AI performance, the researchers wrote.

The paper did not suggest ways to implement such security guardrails or how abuse of AI systems could be detected.

"Determining the optimal bandwidth limit for external communication is an area that merits further research," the researchers wrote.

Large-scale AI systems demand tremendous network bandwidth, and AI supercomputers such as Microsoft's Eagle and Nvidia's Eos rank among the top 10 fastest systems in the world. Ways to limit network performance do exist for devices that support the P4 programming language, which lets operators analyze network traffic and reprogram how switches and routers handle packets.
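To make the bandwidth-cap idea concrete, the sketch below shows a token-bucket rate limiter in Python, the same general pattern programmable network gear uses to throttle traffic. It is an illustration only, not the researchers' proposed mechanism and not P4 code; the rate, burst size, and transfer size are hypothetical values chosen for the example.

```python
import time

class TokenBucket:
    """Conceptual token-bucket limiter: caps the average bytes/sec flowing
    between two points (e.g., an accelerator cluster and external peers)."""

    def __init__(self, rate_bytes_per_sec: float, burst_bytes: float):
        self.rate = rate_bytes_per_sec       # sustained bandwidth cap
        self.capacity = burst_bytes          # maximum short-term burst
        self.tokens = burst_bytes            # start with a full bucket
        self.last_refill = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        """Return True if a transfer of nbytes may proceed now."""
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to the burst capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

# Example: cap cluster-external traffic at 10 GB/s with a 1 GB burst allowance.
limiter = TokenBucket(rate_bytes_per_sec=10e9, burst_bytes=1e9)
if limiter.allow(256 * 1024 * 1024):   # a hypothetical 256 MB transfer
    pass  # forward the transfer
else:
    pass  # queue or drop it until enough tokens accumulate
```

In a real deployment the equivalent logic would run in switch or interconnect hardware rather than application code, which is why the paper points to further research on where to set the limit.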

But good luck asking chip makers to implement AI security mechanisms that could slow down chips and networks, Brookwood says.

"Arm, Intel, and AMD are all busy building the fastest, meanest chips they can build to be competitive," he says. "I don't know how you can slow down."

Remote Possibilities Carry Some Risk

The researchers also suggested disabling chips remotely — a capability that Intel has built into its newest server chips. The On Demand feature is a subscription service that will allow Intel customers to turn on-chip features, such as AI extensions, on and off like heated seats in a Tesla.

The researchers also suggested an attestation scheme in which chips allow only authorized parties to access AI systems, enforced through cryptographically signed digital certificates. Firmware could encode which users and applications are authorized, and those rules could be changed with updates.

While the researchers did not provide technical recommendations on how this would be done, the idea is similar to how confidential computing secures applications on chips by attesting authorized users. Intel and AMD have confidential computing on their chips, but it is still early days for the emerging technology.
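As a rough illustration of that attestation pattern, the minimal Python sketch below has an issuing authority sign an authorization record and has "firmware" verify the signature before enabling restricted features. It assumes the third-party cryptography package; the device ID, party name, and feature list are hypothetical, and the researchers did not specify any implementation like this.

```python
# Minimal sketch of certificate-style attestation: an issuing authority signs an
# authorization record, and device firmware verifies the signature before
# enabling restricted features. Uses Ed25519 from the 'cryptography' package.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# -- Issuing authority (e.g., a vendor or regulator) -------------------------
issuer_key = Ed25519PrivateKey.generate()
issuer_public = issuer_key.public_key()        # burned into firmware at manufacture

authorization = json.dumps({
    "device_id": "accel-cluster-042",          # hypothetical identifiers
    "authorized_party": "lab-7",
    "features": ["ai_extensions"],
    "expires": "2025-01-01",
}, sort_keys=True).encode()
signature = issuer_key.sign(authorization)

# -- Device firmware ----------------------------------------------------------
def firmware_allows(record: bytes, sig: bytes) -> bool:
    """Enable restricted features only if the record carries a valid signature."""
    try:
        issuer_public.verify(sig, record)      # raises InvalidSignature on tampering
    except InvalidSignature:
        return False
    return True

assert firmware_allows(authorization, signature)
assert not firmware_allows(authorization.replace(b"lab-7", b"lab-9"), signature)
```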

There are also risks to remotely enforcing policies.

"Remote enforcement mechanisms come with significant downsides, and may only be warranted if the expected harm from AI is extremely high," the researchers wrote.

Brookwood agrees.

"Even if you could, there are going to be bad guys who are going to pursue it," he says. "Putting artificial constraints on good guys is going to be ineffective."

About the Author

Agam Shah, Contributing Writer

Agam Shah has covered enterprise IT for more than a decade. Outside of machine learning, hardware, and chips, he's also interested in martial arts and Russia.
