The Next Generation of Threat Detection Will Require Both Human and Machine Expertise
To be truly effective, threat detection and response need to combine the strengths of people and technology.
July 14, 2022
There is a debate in the world of cybersecurity about whether to use human or machine expertise. However, this is a false dichotomy: Truly effective threat detection and response need both kinds of expertise working in tandem.
It will be years before machines completely replace humans in typical detection and response tasks. In the meantime, we expect a symbiotic relationship between humans and machines: together, threat detection and response become faster and more intelligent, with humans free to focus on what humans do best while artificial intelligence (AI) handles the tasks better suited to machine processing.
Threat detection is very much an adversarial problem. Attacks rely on stealth, which often makes detection difficult, especially among billions of data points. Technologies we've relied on for the past 20 years are not sufficient to combat threats or sift through the "noise" to find the "signal." Yet skilled humans can find threats that rule-based systems cannot identify.
Any system that uses AI for the next generation of threat detection will need to harness the power of both human and machine expertise and be able to learn and adapt based on human feedback.
Perfection Is Not the Goal, Human Performance Is
There's a misconception that AI can't really make decisions, and that we therefore need vastly experienced human experts with irreproducible intuition.
Looking at this through the lens of the classic Turing test, we asked: Can a machine outperform a security analyst in 80% of the work currently done by humans? If the answer is yes, imagine the productivity gains and efficiency for security operations.
We see reason for optimism here. Forty years ago, a chess engine beating a human was unthinkable, but the problem was settled in half that time. Just 10 years ago, automated audio transcription was poor, and humans were better at the task. Now machines can transcribe at least as well as humans.
Teaming Up for the Best Outcome
Most companies can't hire enough staff to deal with all of the security alerts. The ideal solution to this talent crunch employs intelligent automation to assist security analysts, incident responders, and threat hunters. There are three main ways to successfully apply security automation:
1. Alert triage. Turning millions of alerts and thousands of events into a handful of actionable cases with context about what happened and why helps prioritize tasks for human workers.
2. Incident response. Automating repetitive tasks reduces the mean time to detect (MTTD) and mean time to respond (MTTR). This frees up human analysts to respond to more important threats and make more effective, immediate decisions.
3. Threat detection. Threat detection is an offensive game, focused on identifying and correlating new threats across the network, different endpoints, and applications while prioritizing actions over alerts. Of the three, this is also the main area for improvement: How can we apply automation more effectively to threat detection?
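To make the triage step above concrete, here is a minimal sketch of collapsing a flood of alerts into a handful of cases with context. The field names, grouping key, and threshold are assumptions for illustration, not any specific product's schema.

```python
from collections import defaultdict

def triage(alerts, min_alerts=2):
    """Collapse raw alerts into cases by grouping on (host, rule).

    Each case carries context -- which rule fired, on which host,
    how many times, and when it started -- so an analyst sees a few
    actionable cases instead of a stream of individual alerts.
    """
    groups = defaultdict(list)
    for alert in alerts:
        groups[(alert["host"], alert["rule"])].append(alert)

    cases = []
    for (host, rule), grouped in groups.items():
        if len(grouped) >= min_alerts:  # drop one-off noise
            cases.append({
                "host": host,
                "rule": rule,
                "count": len(grouped),
                "first_seen": min(a["ts"] for a in grouped),
            })
    # Highest-volume cases first, to help analysts prioritize.
    return sorted(cases, key=lambda c: c["count"], reverse=True)

alerts = [
    {"host": "web-01", "rule": "brute_force", "ts": 1},
    {"host": "web-01", "rule": "brute_force", "ts": 2},
    {"host": "web-01", "rule": "brute_force", "ts": 3},
    {"host": "db-02", "rule": "port_scan", "ts": 2},
]
cases = triage(alerts)
```

Four alerts collapse into a single case here; at production scale, the same grouping turns millions of alerts into the "handful of actionable cases" described above.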
Automating Threat Detection
There are two kinds of automation. The first replicates simple human actions and builds them into an AI-driven process. Threat detection, however, is essentially a decision-making process.
The second kind of automation requires us to determine which incidents genuinely require escalation to human security analysts. Automation technology has matured to the point where, in some security operations, machines already exceed human accuracy. The goal is to build a decision engine that makes decisions as well as human beings do, if not better.
But how can we trust that machine decision-making equals or supersedes human decision-making? Simple. Look at the data!
Automation may mark an alert as an incident that a human security analyst later closes without escalation. Ask them why, and the analyst will walk you through their thought process. Those "whys" are the basis of what we call a factor. Factors that are not immediately obvious may play an important part in the final decision.
The more factors we gather, the sharper the accuracy of both human and machine expertise. Meanwhile, we can also reduce false positives. Every difference between human and machine may uncover additional factors, or human analysts may combine factors in different ways than the automated system.
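One minimal way to capture an analyst's "whys" is to record each as a named factor attached to the alert, so that human and machine verdicts can be compared factor by factor. The factor names and fields below are invented for this sketch.

```python
# Hypothetical factors an analyst might cite when closing an alert
# without escalation. Each factor is a named boolean observation.
KNOWN_SCANNERS = {"10.0.0.5"}  # assumed allowlist of internal scanners

def extract_factors(alert):
    return {
        "source_is_internal_scanner": alert["src_ip"] in KNOWN_SCANNERS,
        "outside_business_hours": not (9 <= alert["hour"] < 18),
        "user_recently_reset_password": alert.get("pwd_reset_24h", False),
    }

alert = {"src_ip": "10.0.0.5", "hour": 3, "pwd_reset_24h": False}
factors = extract_factors(alert)
# When the machine's verdict disagrees with the analyst's, the
# disagreement can be traced back to which factors differed.
```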
Improving the Decision Engine
A rules engine is limited to modeling just the "bad" qualities or behavior we observe in a pool of data. As a result, it can only identify and respond to incidents that fall within those criteria. In contrast, a decision engine teaches the machine both "bad" and "good" and enables the model to progressively learn.
Mimicking and replicating a human's approach to a decision delivers the same decision, only automated. Hundreds of decisions can be made in a single minute, and resolution time plummets. Instead of wading through 20 routine alerts, human analysts can focus their time and energy on one or two actionable cases.
Triage presents thousands of alerts a day. But in threat hunting, the problem is three or four orders of magnitude larger. Hundreds of millions of events mean we're looking for the proverbial needle in a haystack. So how do we apply the same factor analysis approach to threat hunting as we do to alert triage?
Factors can be mapped to each of these hundreds of millions of events with feature engineering. If we extract a given factor, we can apply transformations and reduce the number of different values the factor has (its dimensionality), which is especially useful when dealing with 100 different values or more.
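As an example of that dimensionality reduction, a high-cardinality factor such as a process name can be collapsed into a few buckets before scoring. The bucket names and the sets that define them are assumptions for the sketch.

```python
# Reduce a factor with hundreds of distinct values to a handful of
# buckets (lower dimensionality) before it is mapped to a score.
COMMON_ADMIN_TOOLS = {"powershell.exe", "cmd.exe", "wmic.exe"}
SIGNED_BROWSERS = {"chrome.exe", "firefox.exe", "msedge.exe"}

def bucket_process_name(name):
    name = name.lower()
    if name in COMMON_ADMIN_TOOLS:
        return "admin_tool"  # frequently abused; worth a closer look
    if name in SIGNED_BROWSERS:
        return "browser"     # usually benign
    return "other"           # the long tail collapses into one value
```

Instead of learning over 100-plus raw values, the decision engine now reasons over three, which makes per-bucket scores far easier to estimate and to explain.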
This allows us to map each factor to a score and combine them for a final score, which the AI can use to make decisions. But because there will always be differences in decisions made by human analysts and decision engines, the AI must be able to accept human feedback.
This is supervised machine learning in action. Humans provide feedback via labeling, and that input "educates" the system as it builds a model. (For tasks that suit it, an unsupervised approach is also possible.) To work effectively, AI needs to be explainable, customizable, and adaptable.
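A toy version of this feedback loop, assuming binary factors and analyst labels (1 = escalate, 0 = close), might combine per-factor scores and nudge the weights with a simple perceptron-style update whenever the machine disagrees with the analyst. A real system would use a proper supervised learner; the weights and threshold here are illustrative.

```python
def score(weights, factors):
    """Combine per-factor scores into one final score."""
    return sum(weights[f] for f, present in factors.items() if present)

def learn_from_feedback(weights, factors, analyst_label,
                        threshold=1.5, lr=0.5):
    """Adjust factor weights when machine and analyst disagree."""
    machine_label = 1 if score(weights, factors) >= threshold else 0
    error = analyst_label - machine_label  # +1, 0, or -1
    for f, present in factors.items():
        if present:
            weights[f] += lr * error
    return weights

weights = {"admin_tool": 0.5, "outside_hours": 0.5, "known_scanner": 0.0}
# An analyst escalates an alert the machine would have closed,
# so the weights on the factors that were present move upward:
factors = {"admin_tool": True, "outside_hours": True, "known_scanner": False}
weights = learn_from_feedback(weights, factors, analyst_label=1)
```

After the update, the same combination of factors clears the escalation threshold, so the engine's next decision on a similar alert matches the analyst's.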
A decision engine built on human expertise, with automation incorporated wherever possible: that is what the next generation of security operations center (SOC) technology will look like.
About the Author
CEO and Co-Founder, LogicHub
Kumar Saurabh is the CEO and co-founder of the security intelligence automation platform LogicHub. He has 15 years of experience in the enterprise security and log management space, leading product development efforts at ArcSight and SumoLogic before leaving to co-found LogicHub.