Building Guardrails for Autonomic Security

AI holds promise for automating security, but there are miles to go in establishing decision-making boundaries.

Sounil Yu, CISO and Head of Research, JupiterOne

July 18, 2022


Would you let a child drive a car? I imagine that most people would think that is reckless and dangerous. However, what if the car were affixed to guardrails that limited its movement, so that it could only travel within certain bounds inside an amusement park?

Allowing the current generation of artificial intelligence (AI) technologies to make decisions and take actions on our behalf is like having a 5-year-old take a car out for a joyride, but without the appropriate guardrails to prevent a terrible accident or an incident with potentially irreversible consequences.

Security professionals are often led to believe that AI, machine learning, and automation will revolutionize our security practices and allow us to automate security, perhaps even achieve a state of "autonomic security." But what does that really mean and what unintended consequences might we encounter? What guardrails should we consider that are commensurate with the "age" of AI?

To understand what we are getting ourselves into and the appropriate guardrails for security use cases, let us consider the following three questions.

  • How do AI/ML, decision-making, and automation relate to one another?

  • How mature are our AI/ML and automated decision-making capabilities?

  • How mature do they need to be for security?

To answer each of these questions, we can examine a combination of three frameworks: the OODA loop, DARPA's Three Waves of AI, and Classical Education.

OODA Loop
The OODA loop stands for Observe, Orient, Decide, Act, but let's use a slightly modified version:

  • Sensing

  • Sense-making

  • Decision-making

  • Acting

Within this framework, AI/ML (sense-making) is distinct from automation (acting) and connected by a decision-making function. Autonomic means involuntary or unconscious. In the context of this framework, autonomic could mean either skipping sense-making and decision-making (e.g., involuntary stimulus-response reflexes) or skipping just decision-making (e.g., unconscious breathing). In either case, something that is autonomic skips decision-making.
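
To make that distinction concrete, here is a minimal, purely illustrative sketch in Python of the modified loop. Every name in it (Finding, sensing, sense_making, and so on) is a hypothetical stand-in, not a reference to any real product or API; the point is only to show where decision-making sits and what gets skipped in an autonomic flow.

```python
from dataclasses import dataclass

# Hypothetical stages of the modified OODA loop described above.
# All names are illustrative assumptions, not real tools or APIs.

@dataclass
class Finding:
    source: str
    severity: str

def sensing() -> list:
    """Sensing: collect raw telemetry (logs, alerts) from sensors."""
    return [Finding(source="edr", severity="high")]

def sense_making(findings: list) -> list:
    """Sense-making: enrich and correlate findings (where AI/ML typically sits)."""
    return [f for f in findings if f.source == "edr"]

def decision_making(findings: list) -> list:
    """Decision-making: a human or policy engine chooses which findings warrant action."""
    return [f for f in findings if f.severity == "high"]

def acting(findings: list) -> None:
    """Acting: automation executes the chosen response."""
    for f in findings:
        print(f"responding to {f.severity} finding from {f.source}")

def full_loop() -> None:
    # Sensing -> sense-making -> decision-making -> acting
    acting(decision_making(sense_making(sensing())))

def autonomic_loop() -> None:
    # "Autonomic": decision-making is skipped; acting follows sense-making directly
    acting(sense_making(sensing()))
```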

DARPA's Three Waves of AI
DARPA's framework defines the advancement of AI through three waves (describe, categorize, and explain). The first wave takes the handcrafted knowledge of experts and codifies it into software to provide deterministic outcomes. The second wave involves statistical learning systems, enabling capabilities such as pattern recognition and self-driving cars. This wave produces results that are statistically impressive but individually unreliable.

These systems have minimal reasoning capabilities, so when they produce errant results, they cannot explain why their sense-making went wrong. At DARPA's third wave, AI is able to provide explanatory models that let us understand how and why any sense-making mistakes are made. This understanding helps increase our trust in its sense-making capabilities.

According to DARPA, we haven't reached this third wave yet. Current ML capabilities can give us answers that are often correct, but they're not mature enough to tell us how or why they arrived at their answers when they're incorrect. Mistakes in security systems leveraging AI can have consequential outcomes, so root cause analysis is critically important to understand the reason behind these failures. However, we do not get any explanation of the "how" and "why" with results produced by the second wave.

Classical Education
Our third framework is the Classical Education Trivium, which describes three learning stages in child development. At the elementary school stage, children focus on memorizing facts and learning about structures and rules. At the dialectic stage in middle school, they focus on connecting related topics and explaining how and why. Finally, in the rhetoric stage of high school, students integrate subjects, reason logically, and persuade others.

If we expect children to be able to explain how and why by middle school (somewhere around the ages of 10 to 13), that suggests that the current generation of AI, which lacks the ability to explain, isn't past the elementary stage! It has the cognitive maturity of a child younger than 10 (and some suggest significantly younger).

With autonomic security, we're skipping decision-making. But if we were to have the current generation of AI do the decision-making for us, we must recognize that we're dealing with a system that has the decision-making capacity of an immature child. Are we ready to let these systems make decisions on our behalf without proper guardrails?

Need for Guardrails
The march toward automated and autonomic security will undoubtedly continue. However, with some guardrails, we can minimize the carnage that would otherwise ensue. Here are points for consideration, with an illustrative sketch after the list:

  • Sensor diversity: Ensure sensor sources are trustworthy and reliable, based on multiple sources of truth.

  • Bounded conditions: Ensure decisions are highly deterministic and narrowly scoped.

  • Established thresholds: Know when the negative repercussions of taking action might exceed the costs of inaction if something goes wrong.

  • Algorithmic integrity: Ensure that the entire process and all assumptions are well-documented and understood by the operators.

  • Brakes and reverse gear: Have a kill switch ready in case the system goes beyond its scope, and make any action immediately reversible.

  • Authorities and accountabilities: Have pre-established authorities for taking action and accountabilities for outcomes.
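
As a rough illustration of how several of these guardrails might be wired around an automated response, here is a hedged Python sketch. Every name in it (ALLOWED_ACTIONS, guarded_execute, the kill-switch environment variable, and so on) is a hypothetical assumption for illustration, not a reference to any real product or standard.

```python
import os

# Hypothetical guardrail wrapper around an automated response action.
# All names and values are illustrative assumptions.

ALLOWED_ACTIONS = {"isolate_host", "disable_account"}   # bounded conditions
MAX_BLAST_RADIUS = 5                                    # established threshold
KILL_SWITCH_ENV = "AUTONOMIC_KILL_SWITCH"               # brakes

def corroborated(finding: dict, min_sources: int = 2) -> bool:
    """Sensor diversity: require multiple independent sources of truth."""
    return len(set(finding.get("sources", []))) >= min_sources

def guarded_execute(action: str, targets: list, finding: dict, approved_by: str) -> list:
    """Run an automated action only inside pre-established guardrails.

    Returns the list of targets acted on, so the caller can undo the
    action later (reverse gear).
    """
    if os.environ.get(KILL_SWITCH_ENV) == "1":
        return []  # brakes: halt all automation immediately
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"action {action!r} is out of scope")  # bounded conditions
    if len(targets) > MAX_BLAST_RADIUS:
        raise ValueError("impact exceeds threshold; escalate to a human")
    if not corroborated(finding):
        raise ValueError("finding not corroborated by multiple sensors")
    if not approved_by:
        raise ValueError("no accountable authority recorded")  # accountability

    acted_on = []
    for target in targets:
        # Audit trail supports algorithmic integrity and accountability
        print(f"{approved_by}: {action} on {target}")
        acted_on.append(target)
    return acted_on
```

The specific checks matter less than the pattern: every automated action passes through explicit, pre-established boundaries, leaves an audit trail, and can be halted or reversed.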

Allowing a child to drive a car without proper guardrails would be irresponsible. Let's make sure that we have well thought-out guardrails for AI-driven security before we let our immature machines take the wheel.

About the Author(s)

Sounil Yu

CISO and Head of Research, JupiterOne

Sounil Yu is the current CISO and head of research at JupiterOne, a cyber asset management platform. He previously served as CISO-in-Residence for YL Ventures, where he worked closely with aspiring entrepreneurs to validate their startup ideas and develop approaches for hard problems in cybersecurity. Prior to that, Yu served at Bank of America as their Chief Security Scientist and at Booz Allen Hamilton, where he helped improve security at several Fortune 100 companies and government agencies. He is the creator of the Cyber Defense Matrix and the D.I.E. Triad, which are helping to reshape how the industry thinks about and approaches cybersecurity. He serves on the Board of the FAIR Institute and SCVX; co-chairs Art into Science: A Conference on Defense; volunteers for Project N95; contributes as a visiting National Security Institute fellow at GMU's Scalia Law School; and advises many security startups.
