Dark Reading is part of the Informa Tech Division of Informa PLC


Careers & People

1/18/2018
10:30 AM
Jose Nazario
Commentary

How to Keep Blue Teams Happy in an Automated Security World

The creativity and intuition of your team members must be developed and nurtured.

In the past year, several high-profile figures, including entrepreneur Elon Musk and physicist Stephen Hawking, have discussed the threats posed by artificial intelligence (AI). While some have mocked warnings about dangers that sound like Skynet from the Terminator films, more thought has recently gone into the impact of intelligent automation on the workforce.

In cybersecurity, skilled labor shortages have created a need to scale up the workforce in the face of nonstop threats and attacks. That need, coupled with the copious amounts of readily available machine-readable data, has fueled decades of machine learning research, and there is now significant interest in deploying machine learning in production.

I'd argue that much of the fear is baseless and the hype far-fetched: generalized AI exists only in fiction or as a Mechanical Turk-style product. We won't have Terminators building themselves to eliminate humanity anytime soon.

But let's explore a plausible future of a cybersecurity world filled with "intelligent automation," which I would describe as the complement of systems (computers, data models, and algorithms) that work under human direction to automate parts of the workflow. In doing so, we see that the concept may be drearier than we had imagined.

Foundations for Automation
In cyber defense, automation has been a long time coming. Foundational work includes the MITRE CVE effort, which enabled machine-to-machine linkages between observations (vulnerability scan results, IDS hits), and standards such as OASIS OpenC2, which let products interoperate nearly seamlessly. Tools such as McAfee's ePolicy Orchestrator and those offered by companies such as Phantom Cyber typically achieve automation through specific integrations.
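To make the interoperability point concrete, here is a minimal sketch of what a machine-to-machine command looks like under a standard such as OpenC2. The field names follow the general action/target shape of the OASIS OpenC2 Language Specification, but the specific values are illustrative, not taken from any real deployment.

```python
import json

# A minimal sketch of an OpenC2-style command. The action/target structure
# follows the OASIS OpenC2 spec; the address below is an RFC 5737 example
# range, not a real target.
command = {
    "action": "deny",                            # block the traffic
    "target": {"ipv4_net": "203.0.113.0/24"},    # network range to deny
    "args": {"response_requested": "ack"},       # ask the actuator to confirm
}

# Any OpenC2-capable firewall or orchestration tool could consume this JSON,
# which is what makes product-to-product automation possible without
# one-off integrations.
payload = json.dumps(command)
print(payload)
```

The value of the standard is exactly this: the producing tool (an IDS, a SOAR playbook) and the consuming tool (a firewall) only need to agree on the message shape, not on each other's APIs.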

In machine learning, the rise of big data and faster processing has opened new doors. Historically, cybersecurity research using machine learning focused on getting big results out of as little data as possible: countless malware classification papers and IDS systems, for example, tried to achieve high true-positive rates from as few bytes as possible, but they typically fell flat in the real world. The rise of big data in cybersecurity could enable a more holistic approach and more accurate results in production. At least, I hope so.
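As a toy illustration of the "as few bytes as possible" style of classifier those older papers relied on, consider comparing a blob's byte histogram against per-class reference histograms. This is a deliberately simplistic sketch of the genre, not a production technique, and the "training" profiles below are invented:

```python
from collections import Counter

def byte_histogram(blob: bytes) -> Counter:
    """Normalized frequency of each byte value in the blob."""
    counts = Counter(blob)
    total = len(blob)
    return Counter({b: c / total for b, c in counts.items()})

def distance(h1: Counter, h2: Counter) -> float:
    """L1 distance between two byte histograms (missing keys count as 0)."""
    keys = set(h1) | set(h2)
    return sum(abs(h1[k] - h2[k]) for k in keys)

# Hypothetical reference profiles: ASCII-heavy text vs. high-byte-heavy
# (e.g., packed or encrypted) content.
text_profile = byte_histogram(b"GET /index.html HTTP/1.1\r\nHost: example.com")
packed_profile = byte_histogram(bytes(range(128, 256)) * 4)

def classify(blob: bytes) -> str:
    h = byte_histogram(blob)
    return "benign" if distance(h, text_profile) < distance(h, packed_profile) else "suspicious"

print(classify(b"POST /login HTTP/1.1"))  # ASCII bytes resemble the text profile
```

A classifier like this looks great on a curated benchmark and collapses on real traffic, which is precisely the failure mode the paragraph describes; richer data allows features far beyond raw byte frequencies.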

What the Future Might Hold
With standards, interoperability, machine learning, and expert judgment and experience now converging, a significant amount of cyber-defense operations is ripe for automation. This includes automation of human knowledge and pattern recognition, which is basically expert judgment built over years of experience. Given the workforce gaps we face, I expect this to get addressed by the market in the coming decade.

Let's assume that all sorts of magic — technological and organizational — happens, that machine learning pans out, and cyber-defense automation gains significant traction. Algorithms will consume a wide variety of data from operational security tools, network and systems performance technologies, and even outside events. During an intrusion, the cyber-defense team will work together via a unified platform to isolate adversaries and prevent future intrusions. In an instant, networks will be blocked, software will be patched, and access controls will change. Team members will be able to rely partly on algorithms and agents (some personalized) to review statuses and delegate tasks to cyber-defense agents.
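The "unified platform" idea can be sketched as a single incident record fanning out to pluggable response actions. Everything here is hypothetical, the action names and the Incident fields are invented for illustration, but it captures the shape of playbook-driven response:

```python
from dataclasses import dataclass, field

@dataclass
class Incident:
    """A hypothetical incident record shared by all response steps."""
    source_ip: str
    affected_host: str
    actions_taken: list = field(default_factory=list)

# Each step is a small, swappable action; real platforms would call out to
# firewalls, identity systems, and patch managers here.
def block_network(incident: Incident) -> None:
    incident.actions_taken.append(f"blocked {incident.source_ip} at the firewall")

def revoke_access(incident: Incident) -> None:
    incident.actions_taken.append(f"revoked credentials on {incident.affected_host}")

def schedule_patch(incident: Incident) -> None:
    incident.actions_taken.append(f"queued emergency patch for {incident.affected_host}")

PLAYBOOK = [block_network, revoke_access, schedule_patch]

incident = Incident(source_ip="203.0.113.7", affected_host="db01")
for step in PLAYBOOK:
    step(incident)
print(incident.actions_taken)
```

The design point is that the playbook, not any individual tool, owns the sequencing, which is what lets "networks blocked, software patched, access changed" happen in one coordinated pass.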

What is the role of people in that automated utopia? With machine learning algorithms doing the bulk of the detection work, and even response work, where do the various team members fit in?

I can imagine a scenario where lower tiers of security ops teams do basic alert and event classification work that ultimately trains and updates machine learning models. This layer of the staff, greatly reduced in number but significantly more effective (no fatigue, for example), will exist simply to keep detection algorithms up to date. One tier above would augment those algorithms when they fail to reach firm enough judgments, with team members reviewing evidence to make the final call.
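The triage-trains-the-model loop can be sketched in a few lines: every analyst verdict updates per-token label counts, and new alerts are scored against those counts. This is a stand-in for real model retraining, and the alert texts and labels are invented, but it shows how tier-1 work becomes training signal:

```python
from collections import defaultdict

# token -> label -> count, accumulated from analyst verdicts
label_counts = defaultdict(lambda: defaultdict(int))

def record_verdict(alert_text: str, label: str) -> None:
    """A tier-1 analyst classifies an alert; the model learns from it."""
    for token in alert_text.lower().split():
        label_counts[token][label] += 1

def score(alert_text: str) -> str:
    """Pick the label whose tokens analysts have endorsed most often."""
    totals = defaultdict(int)
    for token in alert_text.lower().split():
        for label, count in label_counts[token].items():
            totals[label] += count
    # No overlap with any past verdict: escalate to a human.
    return max(totals, key=totals.get) if totals else "needs-review"

record_verdict("outbound beacon to known C2 domain", "malicious")
record_verdict("failed login from office VPN range", "benign")
print(score("beacon to suspicious domain"))
```

Note the fallback: when the model has no basis for a judgment, the alert escalates, which is exactly the "one tier above" role described here.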

The upper-escalation tiers, which typically are researchers or a "master layer," will hunt for adversaries, gather evidence, and help create new detection models. This will enable them to scale operations in time and space across their organizations and ultimately arm the next tier with "ninja"-level skills, even if they lack the years of experience typically needed to spot threats.

In some environments, people will remain in the loop to approve and deny actions that machines propose and then complete. These individuals will exist to avert catastrophes (remember WarGames?), or even to accept blame, if you're feeling particularly cynical. But let's face it: this will be pretty dull work. People probably will be relegated to inspecting the results of automated responses and dealing with legacy systems that can't integrate with the automation framework. Overtime might go down, but workers will be slaves to a machine, which would be demoralizing.
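That approval gate reduces to a simple pattern: machines propose, a person decides, and only approved actions execute. In this sketch the proposed actions are invented and the analyst's judgment is stood in for by a predicate, but the control flow is the point:

```python
# Hypothetical machine-proposed actions awaiting human review.
proposed = [
    ("isolate", "workstation-14"),
    ("reimage", "workstation-14"),
    ("block", "203.0.113.9"),
]

def human_review(action: str, target: str, approve) -> str:
    """'approve' stands in for the analyst's decision on each proposal."""
    if approve(action, target):
        return f"executed {action} on {target}"
    return f"held {action} on {target} for escalation"

# Policy stand-in: this reviewer declines destructive reimaging.
decisions = [
    human_review(action, target, lambda a, t: a != "reimage")
    for action, target in proposed
]
for decision in decisions:
    print(decision)
```

The human adds value only at the "reimage" decision; every other click is rubber-stamping, which is the drudgery the paragraph warns about.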

This vision is somewhat dystopian. A career path that demands creativity and insight, and rewards them with a great paycheck, is likely to see demand drop and its entry-level workforce relegated to working while chained to algorithms.

Preparing for an Automated Tomorrow
To avoid that version of the future, companies need to work with their cyber-defense teams. Regardless of automation and machine-assisted decision-making, you still rely on a team of people to execute plans. To keep a satisfied team, it pays to invest in a vision and reality wherein the team uses algorithms to amplify their abilities, not replace them. Machines and algorithms are fallible, just as people are, but humans must do more than act as backstops for misbehaving technology; their creativity and intuition, which must be developed and nurtured, need to drive the human-algorithm partnership.


Dr. Jose Nazario is the Director of Security Research at Fastly and is a recognized expert on cyberthreats to ISPs, network subscribers, and enterprises from cybercrime and malware. He was previously the Research Director for Malware Analysis at Invincea Labs. Before his ...