
Commentary | Jose Nazario | 1/18/2018 10:30 AM

How to Keep Blue Teams Happy in an Automated Security World

The creativity and intuition of your team members must be developed and nurtured.

In the past year, several high-profile leaders have discussed the threats posed by artificial intelligence (AI), including entrepreneur Elon Musk and physicist Stephen Hawking. While some have mocked warnings about dangers that sound like Skynet from the Terminator films, more serious thought has recently gone into the impact of intelligent automation on the workforce.

In cybersecurity, skilled labor shortages have created a need to scale up the workforce in the face of nonstop threats and attacks. That, coupled with the copious amounts of readily available machine-readable data, has fueled decades of machine learning research, and there is now significant interest in deploying machine learning into production.

I'd argue that some of the fear is baseless and the hype quite far-fetched: generalized AI exists only in fiction or as a Mechanical Turk-style product. We won't have Terminators building themselves to eliminate humanity anytime soon.

But let's explore a plausible future: a cybersecurity world filled with "intelligent automation," which I would describe as the complement of systems (computers, data models, and algorithms) that work under human direction to automate parts of the workflow. In doing so, we may find the concept drearier than we had imagined.

Foundations for Automation
In cyber defense, automation has been a long time coming. Its foundations include the MITRE CVE effort, which enabled machine-to-machine linkage of observations (vulnerability scans, IDS hits), and standards such as OASIS OpenC2, which let products interoperate nearly seamlessly. Tools such as McAfee's ePolicy Orchestrator, and those offered by companies such as Phantom Cyber, typically achieve automation through specific integrations.
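
To make the interoperability idea concrete, here is a minimal sketch of an OpenC2-style "deny" command: a standardized, machine-readable instruction that products from different vendors can all act on. The field names follow the OASIS OpenC2 language specification, but the addresses, port, and delivery mechanism are illustrative assumptions, not a description of any particular product.

```python
import json

# A simplified sketch of an OpenC2-style "deny" command. Field names follow
# the OASIS OpenC2 language specification; the addresses, port, and transport
# are illustrative assumptions only.
command = {
    "action": "deny",
    "target": {
        "ipv4_connection": {
            "protocol": "tcp",
            "src_addr": "203.0.113.45",  # example suspect address (documentation range)
            "dst_port": 443,
        }
    },
    "args": {"response_requested": "complete"},
}

# Serialize for whatever transport the orchestration platform uses (HTTPS, MQTT, etc.)
print(json.dumps(command, indent=2))
```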

In machine learning, the rise of big data and faster processing has opened new doors. Historically, cybersecurity research using machine learning focused on getting big results out of as little data as possible. Countless malware classification papers and IDS systems, for example, tried to achieve a high true-positive rate from as few bytes as possible, but they typically fell flat in the real world. The rise of big data in cybersecurity could enable a more holistic approach and more accurate results in operational use. At least, I hope so.

What the Future Might Hold
With standards, interoperability, machine learning, and expert judgment and experience now converging, a significant amount of cyber-defense operations is ripe for automation. This includes automation of human knowledge and pattern recognition, which is basically expert judgment built over years of experience. Given the workforce gaps we face, I expect this to get addressed by the market in the coming decade.

Let's assume that all sorts of magic, technological and organizational, happens, that machine learning pans out, and that cyber-defense automation gains significant traction. Algorithms will consume a wide variety of data from operational security tools, network and systems performance technologies, and even outside events. During an intrusion, the cyber-defense team will work together through a unified platform to isolate adversaries and prevent future intrusions. Networks will be blocked, software will be patched, and access controls will change in an instant. The team will be able to rely partly on algorithms and agents (some personalized) to review status and delegate tasks.
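
As a rough illustration of what such a unified response pipeline might look like, here is a hypothetical sketch in Python. Every function and field name is an assumption made for illustration; a real platform would call out to firewalls, patch managers, and identity systems rather than printing what it would do.

```python
# A hypothetical sketch of the unified, automated response loop described above.
# All function and field names are illustrative assumptions.

def block_network(ips):
    for ip in ips:
        print(f"[auto] pushing block rule for {ip}")

def patch_software(cves):
    for cve in cves:
        print(f"[auto] scheduling emergency patch for {cve}")

def revoke_access(accounts):
    for account in accounts:
        print(f"[auto] disabling credentials for {account}")

def respond_to_intrusion(alert):
    """Consume an enriched alert and fan it out to containment actions."""
    block_network(alert.get("adversary_ips", []))
    patch_software(alert.get("exploited_cves", []))
    revoke_access(alert.get("compromised_accounts", []))

# Example alert with made-up indicator values.
respond_to_intrusion({
    "adversary_ips": ["203.0.113.45"],
    "exploited_cves": ["CVE-2017-0144"],
    "compromised_accounts": ["svc-backup"],
})
```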

What is the role of people in that automated utopia? With machine learning algorithms doing the bulk of the detection work, and even response work, where do the various team members fit in?

I can imagine a scenario where lower tiers of security ops teams do basic alert and event classification work that ultimately trains and updates machine learning models. This layer of the staff, greatly reduced in number but significantly more effective (no fatigue, for example), will exist simply to keep detection algorithms up to date. One layer above will augment those algorithms when they fail to reach a firm enough judgment, with team members reviewing evidence to make final decisions.
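
A toy sketch of that feedback loop might look like the following, where analyst verdicts become labeled examples that periodically retrain a detection model. The features, labels, and choice of scikit-learn are assumptions for illustration only, not a description of any particular product.

```python
# Tier-one analysts triage alerts; their verdicts become fresh training labels
# that keep the detection model current. Features and labels are made up.

from sklearn.linear_model import LogisticRegression

# Each alert is reduced to a few numeric features, e.g.
# [outbound_megabytes, unique_destination_ports, after_hours_flag].
seed_features = [[0.1, 2, 0], [250.0, 40, 1], [0.5, 3, 0], [180.0, 55, 1]]
seed_labels = [0, 1, 0, 1]  # 0 = benign, 1 = malicious

model = LogisticRegression().fit(seed_features, seed_labels)

# New analyst verdicts are queued as additional labeled examples.
analyst_verdicts = [([300.0, 60, 1], 1), ([0.2, 1, 0], 0)]

features = seed_features + [f for f, _ in analyst_verdicts]
labels = seed_labels + [label for _, label in analyst_verdicts]
model = LogisticRegression().fit(features, labels)  # periodic retrain

print(model.predict([[275.0, 48, 1]]))  # the updated model scores the next alert
```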

The upper-escalation tiers, which typically are researchers or a "master layer," will hunt for adversaries, gather evidence, and help create new detection models. This will enable them to scale operations in time and space across their organizations and ultimately arm the next tier with "ninja"-level skills, even if they lack the years of experience typically needed to spot threats.

In some environments, people will remain in the loop to approve or deny actions that machines propose and then carry out. These individuals will exist to avert catastrophes (remember War Games?) or, if you're feeling particularly cynical, to accept blame. But let's face it: this will be pretty dull work. People will probably be relegated to inspecting the results of automated responses and dealing with legacy systems that can't integrate with the automation framework. Overtime might go down, but workers will be slaves to a machine, which would be demoralizing.
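
The approval step itself could be as simple as the following sketch: the machine proposes a response action, and a person must confirm it before anything destructive happens. The action format and prompt are hypothetical.

```python
# A minimal human-in-the-loop approval gate. The action format is an assumption.

def propose_action(action):
    """Ask a human to approve or deny a machine-proposed response action."""
    answer = input(f"Approve '{action['description']}'? [y/N] ").strip().lower()
    if answer == "y":
        print(f"[executing] {action['description']}")
    else:
        print(f"[logged] human denied: {action['description']}")

propose_action({"description": "isolate host 10.20.30.40 from the network"})
```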

This vision is somewhat dystopian. A career path that demands creativity and insight, and rewards them with a great paycheck, is likely to see a drop in demand, with an entry-level workforce relegated to working while chained to algorithms.

Preparing for an Automated Tomorrow
To avoid that version of the future, companies need to work with their cyber-defense teams. Regardless of automation and machine-assisted decision-making, you still rely on a team of people to execute plans. To keep that team satisfied, it pays to invest in a vision, and a reality, in which the team uses algorithms to amplify its abilities, not replace them. Machines and algorithms are fallible, just as people are, but humans must do more than act as backstops for misbehaving technology; their creativity and intuition, which must be developed and nurtured, need to drive the human-algorithm partnership.

Dr. Jose Nazario is the Director of Security Research at Fastly and a recognized expert on cyberthreats to ISPs, network subscribers, and enterprises from cybercrime and malware. He was previously the Research Director for Malware Analysis at Invincea Labs.