Careers & People

1/18/2018 10:30 AM
Jose Nazario
Commentary

How to Keep Blue Teams Happy in an Automated Security World

The creativity and intuition of your team members must be developed and nurtured.

In the past year, several high-profile figures, including entrepreneur Elon Musk and physicist Stephen Hawking, have discussed the threats posed by artificial intelligence (AI). While some have mocked warnings about dangers reminiscent of Skynet from the Terminator films, more thought has recently gone into the impact of intelligent automation on the workforce.

In cybersecurity, a skilled labor shortage has created a need to scale up the workforce in the face of nonstop threats and attacks. That need, coupled with the copious amounts of readily available machine-readable data, has fueled decades of machine learning research, and there is now significant interest in deploying machine learning into production.

I'd argue that much of the fear is baseless and the hype far-fetched: generalized AI exists only in fiction or as a Mechanical Turk-style product. We won't have Terminators building themselves to eliminate humanity anytime soon.

But let's explore a plausible future of a cybersecurity world filled with "intelligent automation," which I would describe as the complement of systems (computers, data models, and algorithms) that work under human direction to automate parts of the workflow. In doing so, we see the concept may be drearier than we had imagined.

Foundations for Automation
In cyber defense, automation has been a long time coming. Its foundations include the MITRE CVE effort, which enabled machine-to-machine linkage of observations (vulnerability scans, IDS hits), and standards such as OASIS OpenC2, which allow products to interoperate nearly seamlessly. Tools such as McAfee's ePolicy Orchestrator, and those offered by companies such as Phantom Cyber, typically achieve automation through specific integrations.
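
To make the interoperability point concrete, here is a minimal sketch of what issuing an OpenC2 "deny" command might look like in Python. Only the command structure follows the OASIS OpenC2 language specification; the orchestrator URL and endpoint are hypothetical placeholders.

# Minimal sketch: sending an OpenC2 "deny" command to a hypothetical
# orchestrator. The URL is invented; the command body follows the
# OASIS OpenC2 Language Specification v1.0.
import json
import requests

command = {
    "action": "deny",                            # block traffic
    "target": {"ipv4_net": "198.51.100.0/24"},   # offending subnet (CIDR)
    "args": {"response_requested": "complete"},  # ask for a full response
}

resp = requests.post(
    "https://orchestrator.example.com/openc2",   # hypothetical endpoint
    # Media type as given in the OpenC2 HTTPS transfer specification.
    headers={"Content-Type": "application/openc2-cmd+json;version=1.0"},
    data=json.dumps(command),
    timeout=10,
)
print(resp.status_code)

The point of the standard is exactly this: the same small command vocabulary can drive a firewall, a router, or an endpoint agent without product-specific glue.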

In machine learning, the rise of big data and faster processing has opened new doors. Historically, machine learning research in cybersecurity focused on getting big results out of as little data as possible: countless malware classification papers and IDS systems leaned on as few bytes as possible to achieve a high true-positive score, but they typically fell flat in the real world. The rise of big data in cybersecurity could enable a more holistic approach and more accurate results in production. At least, I hope so.
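
As a rough illustration of that data-driven approach, the scikit-learn sketch below trains a classifier on labeled alert features. The file name and feature columns are invented for this example; a production system would need far richer, messier data.

# Illustrative only: a minimal pipeline for classifying security events
# from labeled features. "alerts.csv" and its columns are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("alerts.csv")  # hypothetical labeled alert data
features = ["bytes_out", "conn_count", "uniq_dsts", "night_ratio"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["is_malicious"], test_size=0.3, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Report precision and recall, not just accuracy: security data is
# heavily imbalanced, one reason narrow models fall flat in production.
print(classification_report(y_test, model.predict(X_test)))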

What the Future Might Hold
With standards, interoperability, machine learning, and expert judgment and experience now converging, a significant amount of cyber-defense operations is ripe for automation. This includes automation of human knowledge and pattern recognition, which is basically expert judgment built over years of experience. Given the workforce gaps we face, I expect this to get addressed by the market in the coming decade.

Let's assume that all sorts of magic, technological and organizational, happens: machine learning pans out, and cyber-defense automation gains significant traction. Algorithms will consume a wide variety of data from operational security tools, network and systems performance technologies, and even outside events. During an intrusion, the cyber-defense team will work together via a unified platform to isolate adversaries and prevent future intrusions. Networks will be blocked, software will be patched, and access controls will change in an instant. The team will be able to rely partly on algorithms and agents (some of them personalized) to review statuses and delegate tasks.
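
One way to picture that unified platform is as a playbook that maps detections to response steps. The sketch below is purely illustrative; every action name is an invented placeholder.

# Hypothetical sketch of a unified response playbook: each detection
# type maps to response steps carried out in an instant.
PLAYBOOK = {
    "credential_theft": ["revoke_sessions", "force_password_reset"],
    "lateral_movement": ["isolate_host", "block_internal_subnet"],
    "known_exploit":    ["apply_patch", "restart_service"],
}

def respond(detection_type, context):
    """Dispatch the playbook steps for one detection."""
    for step in PLAYBOOK.get(detection_type, ["escalate_to_human"]):
        # A real platform would call out to a firewall, patch manager,
        # or IAM system here; this sketch just logs the intent.
        print(f"[{context['host']}] executing step: {step}")

respond("lateral_movement", {"host": "10.0.0.5"})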

What is the role of people in that automated utopia? With machine learning algorithms doing the bulk of the detection work, and even response work, where do the various team members fit in?

I can imagine a scenario in which the lower tiers of security ops teams do basic alert- and event-classification work that ultimately trains and updates machine learning models. This layer of the staff, greatly reduced in number but significantly more effective (no fatigue, for example), will exist simply to keep detection algorithms up to date. One layer above will augment those algorithms when they fail to reach firm enough judgments, with team members reviewing evidence to make the final call.
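
A rough sketch of that feedback loop, with an invented confidence threshold and a stand-in analyst: low-confidence verdicts escalate to a human, and every decision flows back into the model as training data.

# Hypothetical analyst-in-the-loop cycle. The 0.8 threshold, the data,
# and the analyst function are all invented for illustration.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")
rng = np.random.default_rng(0)

# Seed the model with a small bootstrap set (stand-in data).
X_seed = rng.normal(size=(100, 4))
y_seed = rng.integers(0, 2, size=100)
model.partial_fit(X_seed, y_seed, classes=[0, 1])

def triage(features, analyst_review):
    """Return a label, escalating to a human when the model is unsure."""
    proba = model.predict_proba(features.reshape(1, -1))[0]
    if proba.max() < 0.8:                 # model lacks a firm judgment
        label = analyst_review(features)  # tier-1 analyst decides
    else:
        label = int(proba.argmax())
    # Either way, the decision becomes fresh training data.
    model.partial_fit(features.reshape(1, -1), [label])
    return label

# Example: a stand-in "analyst" who flags uncertain events as malicious.
print(triage(rng.normal(size=4), analyst_review=lambda f: 1))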

The upper-escalation tiers, which typically are researchers or a "master layer," will hunt for adversaries, gather evidence, and help create new detection models. This will enable them to scale operations in time and space across their organizations and ultimately arm the next tier with "ninja"-level skills, even if they lack the years of experience typically needed to spot threats.

In some environments, people will remain in the loop to approve or deny actions that machines propose and then complete. These individuals will exist to avert catastrophes (remember WarGames?), or even to accept blame, if you're feeling particularly cynical. But, let's face it, this will be pretty dull work. People probably will be relegated to inspecting the results of automated responses and dealing with legacy systems that can't integrate with the automation framework. Overtime might go down, but workers will be slaves to a machine, which would be demoralizing.
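
In code, that approval gate might look like the hypothetical sketch below: the machine proposes actions, and a person (or, here, a stand-in policy) vetoes anything with too large a blast radius.

# Hypothetical human approval gate. The proposed actions and the
# approval policy are invented placeholders.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str   # e.g. "isolate host 10.0.0.5"
    blast_radius: int  # rough count of affected users or systems

def review_queue(proposals, approve):
    """Run each machine-proposed action past a human before executing."""
    for action in proposals:
        if approve(action):
            print(f"EXECUTE: {action.description}")
        else:
            print(f"VETOED:  {action.description}")

# Averting the WarGames scenario: anything affecting thousands of
# accounts is denied pending a closer human look.
proposals = [
    ProposedAction("block 203.0.113.7 at the edge firewall", 1),
    ProposedAction("disable all VPN accounts", 5000),
]
review_queue(proposals, approve=lambda a: a.blast_radius < 100)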

This vision is somewhat dystopian: a career path that demands creativity and insight, and rewards them with a great paycheck, is likely to see a drop in demand, with an entry-level workforce relegated to working chained to algorithms.

Preparing for an Automated Tomorrow
To avoid that version of the future, companies need to work with their cyber-defense teams. Regardless of automation and machine-assisted decision-making, you still rely on a team of people to execute plans. To keep that team satisfied, it pays to invest in a vision, and a reality, in which the team uses algorithms to amplify its abilities, not replace them. Machines and algorithms are fallible, just as people are, but humans must do more than act as backstops for misbehaving technology; their creativity and intuition, which must be developed and nurtured, need to drive the human-algorithm partnership.

Dr. Jose Nazario is the Director of Security Research at Fastly, and is a recognized expert on cyberthreats to ISPs, network subscribers, and enterprises from cybercrime and malware. He was previously the Research Director for Malware Analysis at Invincea Labs.