AI and ML are important SecOps tools, but human involvement is still required.

Nash Borges, VP of Engineering and Data Science, Secureworks

June 1, 2022


Artificial intelligence (AI) can enhance the efficiency and scale of SecOps teams, but it will not meet all your cybersecurity needs without some human involvement, at least not today.

Most commercial AI successes have been associated with supervised machine learning (ML) techniques specifically tuned for prediction tasks that yield business value. These use cases, such as spoken language understanding for your smart-home assistant and object recognition for self-driving cars, rely on the vast amounts of labeled data and computation required to train complex deep learning models. They also focus on problems that barely change. Cybersecurity is different: we rarely have the millions of examples of malicious activity needed to train deep learning models, and we face intelligent adversaries who frequently change their tactics to outmaneuver our latest detection capabilities, including those using ML.

In addition, the digital exhaust from human behavior in enterprise environments is extremely hard to predict. Anomalies in these systems are common and very rarely represent malicious threat actor behavior. It is therefore unreasonable to expect unsupervised anomaly detection to learn an enterprise environment’s normal behavior and then generate meaningful alerts about malicious activity without raising false alarms on unusual but benign events.

Finally, the degree of data imbalance in threat detection is unlike many other use cases for ML. Imagine for a moment that you are a midsize to large enterprise collecting 1 billion potentially security-relevant telemetry events per day and expect to find one incident worth seriously investigating. Nobody wants to lose the ransomware lottery: missing that one security incident can grind the business to a halt, with the potential for even worse reputational damage. However, if you build an ML-based threat detector that processes each event in isolation and is 99.9% accurate, you would be searching for that one true positive in a sea of 1 million false positives. Conquering this data imbalance requires significant expertise and a multipronged detection strategy.
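The arithmetic behind that alert flood is worth making explicit. A back-of-the-envelope sketch in Python, using the article's hypothetical numbers:

```python
# Back-of-the-envelope illustration of the base-rate problem described above.
events_per_day = 1_000_000_000   # telemetry events collected daily
true_incidents = 1               # incidents actually worth investigating
accuracy = 0.999                 # assumed per-event detector accuracy

# A 99.9%-accurate detector still mislabels 0.1% of the benign events.
false_positives = (events_per_day - true_incidents) * (1 - accuracy)
precision = true_incidents / (true_incidents + false_positives)

print(f"False positives per day: {false_positives:,.0f}")  # ~1 million
print(f"Alert precision: {precision:.1e}")                 # ~1 in a million
```

Even a detector that sounds highly accurate yields alerts that are almost all noise, which is why per-event accuracy alone is a misleading metric here.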

Despite these challenges, there are ways for SecOps teams to leverage the technical power of AI/ML to gain operational efficiencies. The following principles should be considered when doing so.

1. Symbiotic Humans and Machines Work Better Together

Consider ML a complement to human intelligence rather than a substitute for it. In the context of complex systems, especially when combatting intelligent adversaries that adapt quickly, automation delivers the greatest value with active learning at its core. Humans should regularly review the results of ML-based systems, provide feedback, add examples of new malicious behaviors, retune the models, and constantly iterate. Anyone who has ever faced an intelligent adversary, whether in cyberspace or in combat, should be familiar with the OODA loop (observe, orient, decide, act) developed by US Air Force Colonel John Boyd. Active learning follows a similar cycle: it ensures that automatable decisions made in each loop use the best available insights, optimizes the manual analysis performed in some loops, and scales to assist in processing more loops than humanly possible.
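As a rough illustration, uncertainty sampling is one common way to decide which events deserve an analyst's limited attention in such a loop. The sketch below is a toy example; the events, scores, and function names are invented, not any product's API:

```python
# Minimal sketch of an uncertainty-sampling step in a human-in-the-loop cycle:
# the model scores events, the most ambiguous ones go to analysts, and their
# verdicts become fresh labeled examples for the next retraining pass.

def select_for_review(scores, budget):
    """Pick the events the model is least certain about (closest to 0.5)."""
    ranked = sorted(scores.items(), key=lambda kv: abs(kv[1] - 0.5))
    return [event for event, _ in ranked[:budget]]

# Toy probability-of-malicious scores assigned by some hypothetical detector.
scores = {"evt-1": 0.99, "evt-2": 0.51, "evt-3": 0.02, "evt-4": 0.47}

# With an analyst budget of 2, the two most ambiguous events get human review.
to_review = select_for_review(scores, budget=2)
print(to_review)  # ['evt-2', 'evt-4']
```

The confident calls (0.99, 0.02) are automated, while scarce human attention is spent exactly where it improves the model most.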

2. Pick the Right Tool for the Job

You do not have to become an AI expert to make good AI-related decisions for your team, but you should be reasonably informed about the basics to ensure that you are picking the right tool for the job.

First, it is important to know the difference between anomalous and malicious behaviors, because they are rarely the same and require very different detection techniques. The former is easy to discover with unsupervised anomaly detection, which needs no labeled training data; the latter requires supervised learning, which typically needs many historical examples.
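To make the contrast concrete, here is a minimal sketch: a simple statistical outlier test stands in for unsupervised anomaly detection, while the trailing comment hints at the labeled history supervised learning would need. All data is invented:

```python
# Unsupervised side: flag values far from the norm. No labels needed, but
# "anomalous" is not the same as "malicious" -- a spike may be a benign batch job.
import statistics

logins_per_hour = [4, 5, 3, 6, 4, 5, 47]   # the last value is unusual

mean = statistics.mean(logins_per_hour)
stdev = statistics.stdev(logins_per_hour)
anomalies = [x for x in logins_per_hour if abs(x - mean) > 2 * stdev]
print(anomalies)  # [47]

# Supervised side: deciding "malicious" requires many labeled historical
# examples, e.g. ({"failed_logins": 40, "new_geo": True}, "malicious"),
# which is exactly what is scarce in cybersecurity.
```

The unsupervised test runs with zero training data but only says "unusual"; turning "unusual" into "malicious" is where the labeled examples come in.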

Second, alerts with a high signal-to-noise ratio are critical for SecOps teams, and you need to fully understand the downstream effects of any probabilistic system that will not be 100% accurate.

Finally, while nearly every ML technique has been applied to cybersecurity, it is still important to have thousands of signatures from threat intelligence that operate like a minefield of trip wires. When constantly tuned by an expert team of security researchers, signatures provide a critical baseline for detecting known threats that needs to be a part of every security program for the foreseeable future.
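A signature check is, at its core, set membership against curated indicators. A minimal sketch (the indicators below are illustrative; the hash is the well-known EICAR test-file MD5):

```python
# Toy signature "minefield": a curated set of known-bad indicators that act
# as trip wires. Real programs use thousands, tuned by security researchers.
SIGNATURES = {
    "hash:44d88612fea8a8f36de82e1278abb02f",  # EICAR test-file MD5
    "domain:evil.example.com",                # hypothetical C2 domain
}

def match_signatures(event_indicators):
    """Return the known-bad indicators present in an event, if any."""
    return sorted(SIGNATURES & set(event_indicators))

hits = match_signatures(["domain:evil.example.com", "hash:abc123"])
print(hits)  # ['domain:evil.example.com']
```

Cheap, deterministic checks like this give the high-precision baseline for known threats, on top of which probabilistic ML detection operates.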

3. Use AI Where It Can Be Most Successful

It is ironic that many cybersecurity professionals who might trust AI to drive their car are skeptical about letting it execute containment actions. Automated action, after all, is one of the best ways to make your SecOps team more efficient. Automation frees the creative human mind from time-consuming operational tasks, which are right in AI's wheelhouse. ML is useful for detecting advanced threats, especially when you can aggregate evidence, and for prioritizing alerts from a variety of detectors that may come from different systems. AI can also automate low-risk containment actions, such as quarantining suspicious files or requiring users to re-authenticate, which can dramatically increase your SecOps efficiency and lower your cyber-risk.
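One way to encode "automate only the low-risk actions" is a simple policy gate. The action names and confidence threshold below are assumptions for illustration, not any product's API:

```python
# Illustrative containment policy: auto-execute only low-risk actions at high
# confidence; everything else goes to a human analyst.

LOW_RISK_ACTIONS = {"quarantine_file", "force_reauth"}

def decide(action, confidence, threshold=0.9):
    """Gate an automated response on both the action's risk and the
    detector's confidence in the verdict behind it."""
    if action in LOW_RISK_ACTIONS and confidence >= threshold:
        return "auto_execute"
    return "escalate_to_analyst"

print(decide("quarantine_file", 0.97))  # auto_execute
print(decide("isolate_host", 0.97))     # escalate: higher-impact action
print(decide("force_reauth", 0.60))     # escalate: confidence too low
```

Because quarantines and forced re-authentication are cheap to reverse, automating them buys efficiency without betting the business on a probabilistic verdict.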

With all that said, AI/ML cannot be your only cybersecurity strategy — at least, not in 2022. When you are searching for important needles in immense haystacks, the most effective strategy still comes from pairing machine intelligence with the human intuition of expert analysts.

About the Author(s)

Nash Borges

VP of Engineering and Data Science, Secureworks

Nash Borges is Vice President of Engineering and Data Science at Secureworks, where he leads the diverse teams of engineers, data scientists, and researchers that developed the Secureworks cloud-native, enterprise security platform — Taegis™ — from the ground up. In his role, Nash led the development of the Secureworks portfolio of SaaS products built on the Taegis platform, which includes Taegis™ XDR and Taegis™ VDR. Less than three years after launch, Taegis XDR and VDR are now responsible for $165M in ARR.

Nash started his career in cybersecurity at a young age. He was recruited in high school to work for the National Security Agency (NSA), where he built and operated big data systems at the intersection of machine learning and human intuition under the pressure of intelligence operations and international crises, serving in war zones multiple times. Nash earned his Ph.D. in Electrical and Computer Engineering from The Johns Hopkins University, focusing on machine learning (ML) and anomaly detection, and is currently enrolled in Wharton’s CTO Program. He has filed multiple patents, most recently for Secureworks’ ML-based Hands-on-Keyboard detector, which finds threat actors that are “living off the land” using system administration tools that may go unnoticed by other endpoint technologies.
