
Biden's AI Exec Order Is a Start, but We Must Safeguard Innovation

It's important for Congress to strengthen protections for AI and set guardrails to make sure it isn't used maliciously.

Malcolm Harkins

December 12, 2023



The past year has been a whirlwind of artificial intelligence (AI) hype. It was finally met with executive action when the White House issued an order seeking to secure AI's fast-paced future. This was a critical first step toward confronting emerging threats and promoting a safe and secure evolution for a technology that will only grow more popular.

PwC reports nearly all business leaders are "prioritizing at least one initiative related to AI systems in the near term," which underscores just how important it is to protect this innovation. Our ability to seize AI's full potential hinges on ensuring it remains secure, especially if it's handling sensitive business data or national secrets.

The ramifications of ignoring its vulnerabilities would be severe. Adversaries are already leveraging AI maliciously, but with the right protections in place, we can counter emerging threats driven by financial gain, espionage, and cyberterrorism.

The White House secured commitments from leading AI developers to establish guardrails in how they build and deploy their technology, but there is more work to be done, particularly in addressing post-implementation threats. Congress must go a step further and pass legislation that provides more robust protections against AI's malicious use. Biden has recognized this need himself.

Discussions are already taking place regarding the need for additional AI safeguards. In a recent survey conducted by The Conference Board, 26% of respondents say their company has a policy governing the use of generative AI, and 23% say a policy is under development. In another, administered by the Artificial Intelligence Policy Institute, 64% of respondents "support the government creating an organization tasked with auditing AI," and a third, sponsored by IONOS, puts that figure at 75%.

With AI's adoption accelerating, our collective exposure to it accelerates, too. Hundreds of billions of dollars will soon flow into AI systems, according to Goldman Sachs, with the promise of reshaping every aspect of our society. Our government has demonstrated its commitment to cybersecurity in the past, and this is the next iteration of that. It must continue to act decisively to prevent these technologies, our national secrets, and our intellectual leadership from being subject to further risk.

Emerging Threats in an AI World

Attacks on AI will have more severe consequences than those that target individual networks or devices.

Bad actors typically seek financial gain, intellectual property, competitive intelligence, or to manipulate opinions and social divides when they orchestrate a strike. But as AI becomes more prevalent, the harm from those breaches will multiply. Financial crime, espionage, and disinformation will all expand in scope and impact.

Traditional approaches to cyber defense are ill-equipped for AI's complexity. Flaws could be baked into models from conception, without their overseers even noticing. Insiders with specialized skills may go rogue. State-sponsored attackers may even make AI the next frontier for cyber warfare and foreign interference. Cybercriminals will have considerable motivation with so much at stake, making new, tailored security solutions a necessity.

The executive order recognizes these threats and breaks new ground in AI oversight. It promotes safety, protects privacy and civil rights, spurs innovation, and advances the United States' aspirations to become the global leader in AI. However, the government must devise an even more vigorous plan to stay one step ahead of adversaries who will stop at nothing to exploit AI's potential for nefarious means.

The Next Phase of Governing AI

Congress can fill four significant gaps in the executive order when it drafts a bill to promote responsible AI development.

Notably, any legislation should:

  • Require privacy and security-by-design procedures. It's imperative to require protections be built into models from inception, not bolted on later. This is far more effective at minimizing vulnerabilities before release.

  • Mandate run-time and real-time protection. As attacks occur, ongoing monitoring and rapid response are critical to detect anomalies and stop threats before damage spreads. This will require investments in technology specifically built to continuously analyze AI models' inputs and outputs.

  • Ensure agencies have qualified people and proper tools. The executive order notes a research coordination network will advance rapid breakthroughs and development, but all specialized AI security solutions must be implemented by experts who have the resources to manage them effectively. Organizations such as the National Institute of Standards and Technology (NIST) and the Office of Management and Budget (OMB) should help in this area. In fact, NIST is already taking a prominent role by establishing the US Artificial Intelligence Safety Institute (USAISI) "to promote development and responsible use of safe and trustworthy AI."

  • Grow the AI security workforce. Scholarships and other educational initiatives should encourage talented people to seek careers in cybersecurity. The US should also expand visa and naturalization programs to benefit from skilled foreign workers' expertise.
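To make the run-time protection point above concrete, here is a minimal, hypothetical sketch of what continuously analyzing a model's inputs and outputs could look like. Every name in it (`MonitoredModel`, the thresholds, the blocked phrases) is illustrative, not a real product or a design the executive order mandates:

```python
# Illustrative sketch only: wrap a model so every input and output
# passes through anomaly checks before reaching the caller.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class MonitoredModel:
    model: Callable[[str], str]        # the underlying predict function
    max_input_len: int = 2048          # reject suspiciously long prompts
    blocked_phrases: tuple = ("ignore previous instructions",)
    alerts: list = field(default_factory=list)  # audit trail for responders

    def predict(self, prompt: str) -> str:
        # Input-side check: oversized prompts or known injection patterns
        if len(prompt) > self.max_input_len:
            self.alerts.append(("input_too_long", len(prompt)))
            raise ValueError("input rejected by runtime monitor")
        lowered = prompt.lower()
        if any(p in lowered for p in self.blocked_phrases):
            self.alerts.append(("prompt_injection_pattern", prompt[:40]))
            raise ValueError("input rejected by runtime monitor")
        # Output-side check: log anomalies so responders can act quickly
        output = self.model(prompt)
        if not output:
            self.alerts.append(("empty_output", prompt[:40]))
        return output

# Usage: wrap any callable that maps a prompt to a response
echo = MonitoredModel(model=lambda s: s.upper())
print(echo.predict("hello"))  # prints "HELLO"
```

The design choice worth noting is that checks run on both sides of the model call: input filtering blocks known-bad patterns before inference, while output logging builds the audit trail that rapid response depends on.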

With these additional steps in place, the US will be well-positioned to accomplish exactly what the executive order set out to do: become a worldwide leader in AI and "unlock the technology's potential to solve some of society's most difficult challenges."

Securing the Trajectory of AI Progress

We're already at an inflection point with AI. Its pace of development is unprecedented, meaning the risks are compounding. As such, the window for preemptive action is closing. The European Union has taken steps toward regulating AI by focusing on privacy, but more rigorous security processes must be established and implemented before AI's vulnerabilities are exploited on a larger scale.

That's why it's crucial for Congress to strengthen protections for AI, but it can't make those decisions in a vacuum. The best path forward is for the government to collaborate closely with the cybersecurity industry as it carries out President Biden's executive order. We must leverage the industry's 30-plus years of expertise to evolve our national cyber controls and protect the advantage that AI gives us. A comprehensive approach to cybersecurity will cement the US at the forefront of AI — and continue to allow innovation to flourish.

About the Author(s)

Malcolm Harkins

Chief Security & Trust Officer, HiddenLayer

Malcolm Harkins is Chief Security and Trust Officer at HiddenLayer. Harkins has more than two decades of experience in information security leadership roles at top technology companies, including Intel, Cylance, and others. He’s written multiple books on risk management, information security, and IT and earned awards from the RSA Conference, ISC2, Computerworld, and the Security Advisor Alliance. Harkins has testified before the Federal Trade Commission and U.S. Senate Committee on Commerce, Science, and Transportation. Harkins is a Fellow with the Institute for Critical Infrastructure Technology, a non-partisan think tank providing cybersecurity expertise to the House of Representatives, Senate, and various federal agencies. He holds a bachelor's degree in economics from the University of California at Irvine and an MBA in finance and accounting from the University of California at Davis. Harkins also previously taught at UCLA's Anderson School of Management and Susquehanna University.

