Would you let a child drive a car? I imagine most people would consider that reckless and dangerous. But what if the car were affixed to guardrails that limited its movement, so that it could only travel within certain bounds inside an amusement park?
Allowing the current generation of artificial intelligence (AI) technologies to make decisions and take actions on our behalf is like having a 5-year-old take a car out for a joyride without guardrails to prevent a terrible accident, or an incident with potentially irreversible consequences.
Security professionals are often led to believe that AI, machine learning, and automation will revolutionize our security practices and allow us to automate security, perhaps even achieve a state of "autonomic security." But what does that really mean and what unintended consequences might we encounter? What guardrails should we consider that are commensurate with the "age" of AI?
To understand what we are getting ourselves into and the appropriate guardrails for security use cases, let us consider the following three questions.
- How do AI/ML, decision-making, and automation relate to one another?
- How mature are our AI/ML and automated decision-making capabilities?
- How mature do they need to be for security?
To answer each of these questions, we can examine a combination of three frameworks: the OODA loop, DARPA's Three Waves of AI, and Classical Education.
The OODA loop stands for Observe, Orient, Decide, Act, but let's use a slightly modified version:

- Sense-making (Observe and Orient)
- Decision-making (Decide)
- Acting (Act)
Within this framework, AI/ML (sense-making) is distinct from automation (acting) and connected by a decision-making function. Autonomic means involuntary or unconscious. In the context of this framework, autonomic could mean either skipping sense-making and decision-making (e.g., involuntary stimulus-response reflexes) or skipping just decision-making (e.g., unconscious breathing). In either case, something that is autonomic skips decision-making.
DARPA's Three Waves of AI
DARPA's framework describes the advancement of AI in three waves (describe, categorize, and explain). The first wave takes the handcrafted knowledge of experts and codifies it into software to produce deterministic outcomes. The second wave involves statistical learning systems, enabling capabilities such as pattern recognition and self-driving cars. This wave produces results that are statistically impressive but individually unreliable.
These systems have minimal reasoning capabilities, so when they err, they cannot explain why their sense-making produced an incorrect result. In DARPA's third wave, AI is able to provide explanatory models that let us understand how and why any sense-making mistakes are made. This understanding helps increase our trust in its sense-making capabilities.
According to DARPA, we haven't reached this third wave yet. Current ML capabilities can give us answers that are often correct, but they're not mature enough to tell us how or why they arrived at their answers when they're incorrect. Mistakes in security systems leveraging AI can have consequential outcomes, so root cause analysis is critically important to understand the reason behind these failures. However, we do not get any explanation of the "how" and "why" with results produced by the second wave.
Our third framework is the Classical Education Trivium, which describes three learning stages in child development. At the elementary school stage, children focus on memorizing facts and learning about structures and rules. At the dialectic stage in middle school, they focus on connecting related topics and explaining how and why. Finally, in the rhetoric stage of high school, students integrate subjects, reason logically, and persuade others.
If we expect children to be able to explain how and why in middle school (somewhere around the ages of 10 to 13), that suggests the current generation of AI, which lacks the ability to explain, isn't past the elementary stage! It has the cognitive maturity of a child younger than 10 (and some suggest significantly younger).
With autonomic security, we're skipping decision-making. But if we were to have the current generation of AI do the decision-making for us, we must recognize that we're dealing with a system that has the decision-making capacity of an immature child. Are we ready to let these systems make decisions on our behalf without proper guardrails?
Need for Guardrails
The march toward automated and autonomic security will undoubtedly continue. However, with some guardrails, we can minimize the carnage that would otherwise ensue. Here are points for consideration:
- Sensor diversity: Ensure sensor sources are trustworthy and reliable, based on multiple sources of truth.
- Bounded conditions: Ensure decisions are highly deterministic and narrowly scoped.
- Established thresholds: Know the point at which the negative repercussions of acting, should something go wrong, would exceed the costs of inaction.
- Algorithmic integrity: Ensure that the entire process and all assumptions are well-documented and understood by the operators.
- Brakes and reverse gear: Have a kill switch ready in case the system exceeds its scope, and ensure every action is immediately reversible.
- Authorities and accountabilities: Have pre-established authorities for taking action and accountabilities for outcomes.
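To make these guardrails concrete, here is a minimal sketch of how they might gate an automated response pipeline. All names here (`Action`, `guardrail_check`, the allowed scopes, and the risk threshold) are hypothetical illustrations, not part of any particular product:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    scope: str          # what the action is allowed to touch
    reversible: bool    # can the action be undone immediately?
    risk_score: float   # estimated cost if the action turns out to be wrong

# Hypothetical policy values for illustration only.
ALLOWED_SCOPES = {"quarantine_host", "block_ip"}   # bounded conditions: narrow, deterministic actions
RISK_THRESHOLD = 0.3                               # established threshold for acceptable harm

def guardrail_check(action: Action, corroborating_sensors: int, kill_switch_engaged: bool) -> bool:
    """Return True only if every guardrail permits the automated action."""
    if kill_switch_engaged:                  # brakes: operator override halts everything
        return False
    if corroborating_sensors < 2:            # sensor diversity: multiple sources of truth
        return False
    if action.scope not in ALLOWED_SCOPES:   # bounded conditions: stay within scope
        return False
    if not action.reversible:                # reverse gear: action must be undoable
        return False
    if action.risk_score > RISK_THRESHOLD:   # thresholds: act only when expected harm is low
        return False
    return True
```

The point of the sketch is that the machine's sense-making feeds a decision gate the operators fully understand and document (algorithmic integrity), rather than flowing straight into action.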
Allowing a child to drive a car without proper guardrails would be irresponsible. Let's make sure that we have well thought-out guardrails for AI-driven security before we let our immature machines take the wheel.