CISA's Road Map: Charting a Course for Trustworthy AI Development

The agency aims to build a more robust cybersecurity posture for the nation.

Stu Sjouwerman

January 19, 2024


COMMENTARY
Rapid adoption of artificial intelligence technology has sparked serious cyber concerns among AI experts, policymakers, tech industry titans, nations, and world leaders. In response, the Cybersecurity and Infrastructure Security Agency (CISA) unveiled its 2023–2024 "CISA Roadmap for Artificial Intelligence," which aims to foster the secure and trustworthy development and use of AI, as directed by White House Executive Order 14110.

CISA released its "Strategic Plan" for 2023–2025 with the mission of delivering a secure and resilient infrastructure for the American public. The AI road map adapts the same four goals highlighted in that strategic plan:

  • Cyber defense: While AI systems can help organizations improve their defenses against emerging as well as advanced cyber threats, AI-based software systems pose a number of risks that necessitate a robust AI defense. The road map aims to promote the beneficial use of AI and, at the same time, protect the nation's systems from AI-based threats.

  • Risk reduction and resilience: Critical infrastructure organizations are increasingly using AI systems to maintain and strengthen their own cyber resilience. With its road map, CISA aims to promote a responsible and risk-aware adoption of AI-based software systems that are "secure by design" — where security is implemented in the design phase of a product's development lifecycle so that exploitable flaws can be reduced at the source.

  • Operational collaboration: As AI technology proliferates, US citizens and critical infrastructure sectors may be subjected to targeted threats, which may require information sharing and coordinated responses from organizations, law enforcement agencies, and international partners. CISA plans to develop a framework that helps improve alignment and coordination across relevant stakeholders.

  • Agency unification: CISA aims to unify and integrate AI software systems across the agency, which will help it use AI systems more coherently. CISA also plans to recruit and develop a workforce that is capable of optimally harnessing AI systems.

The AI Road Map Puts Security Onus on AI Developers, Not AI Consumers

Historically, AI software manufacturers have resisted building products that are secure by design, shifting the burden of security onto AI consumers. The CISA AI road map urges AI system manufacturers to adopt secure-by-design principles throughout the entire development lifecycle. This includes making secure by design a top business priority, taking ownership of security outcomes for customers, and leading product development with radical transparency and accountability.

CISA Plans to Implement Five Lines of Effort to Achieve Its Stated Goals

CISA has identified five lines of effort to advance the goals outlined in its road map:

  • Responsible use of AI: CISA intends to deploy AI-enabled software tools to bolster cyber defenses and support critical infrastructure. All systems and tools will pass through a rigorous selection process where CISA will ensure that AI-related systems are responsible, secure, ethical, and safe to use. CISA will also deploy robust governance processes that will not only be consistent with federal procurement processes, applicable laws and policies, privacy, and civil rights and liberties, but also adopt an approach for continuous assessment of AI models while reviewing IT security practices to securely integrate the technology.

  • Assure AI systems: To build more secure and resilient AI software development and implementation, CISA plans to champion secure-by-design initiatives, develop security best practices and guidance for a broad range of stakeholders (e.g., federal civilian government agencies, private sector companies, state, local, tribal, and territorial governments) and drive adoption of strong vulnerability management practices, specifying a vulnerability disclosure process and providing guidance for security testing and red teaming exercises for AI systems.

  • Protect critical infrastructure from malicious use of AI: CISA will partner with government agencies and industry partners such as the Joint Cyber Defense Collaborative to develop, test, and evaluate AI tools and collaborate on evolving AI threats. CISA will publish materials to raise awareness of emerging risks and will also evaluate risk management methods to determine the appropriate analytical framework for the assessment and treatment of AI risks.

  • Collaborate with interagency and international partners and the public: To raise awareness, share threat information, and improve incident response and investigative capabilities, CISA plans to foster collaborative approaches such as AI working groups, attending or participating in interagency meetings, and closely coordinating with Department of Homeland Security entities. CISA will work across the interagency to ensure its policies and strategies align with the whole-of-government approach, and it will engage international partners to encourage the adoption of international best practices for secure AI.

  • Expand AI expertise in the workforce: Human vigilance, oversight, and intuition are always needed to detect both AI- and non-AI-based cyber threats and to ensure AI systems are free from errors, biases, and manipulation. That vigilance can only be strengthened with robust security awareness initiatives. This is why CISA plans to continuously educate its workforce on AI software systems and techniques, recruit employees with AI expertise, and conduct security awareness training programs that include situational exercises, so employees understand the legal, ethical, and policy aspects of AI-based systems in addition to the technical ones.

Through the initiatives outlined in the CISA road map, the agency hopes to build a more robust cybersecurity posture for the nation, protect critical infrastructure from malicious use of AI, and prioritize security as a core business requirement in AI-based tools and systems.

About the Author(s)

Stu Sjouwerman

Founder & CEO, KnowBe4, Inc.

Stu Sjouwerman is founder and CEO of KnowBe4, a provider of security awareness training and simulated phishing platforms, with over 56,000 customers and more than 60 million users. He was co-founder of Sunbelt Software, the anti-malware software company acquired in 2010. He is the author of four books, including Cyberheist: The Biggest Financial Threat Facing American Businesses.
