
12/13/2017
10:30 AM
Gary Golomb
Commentary

Automation Could Be Widening the Cybersecurity Skills Gap

Sticking workers with tedious jobs that AI can't do leads to burnout, but there is a way to achieve balance.

According to Cybersecurity Ventures, the cybersecurity skills shortage is now expected to hit 3.5 million positions by 2021 — a huge jump from current estimates of 1 million job openings.

To help compensate for the growing shortage of talent, the cybersecurity industry is embracing artificial intelligence and automation to fill the gap. But can automation actually make the skills gap even greater? Unfortunately, yes — but security can still find a balance.

The Leftover Principle of Automation
The concept of mechanizing human tasks to drive efficiency has been studied since the advent of industrial automation. The primary goal is to automate as much as possible, eliminating human decision making from the process, because human decisions are often the most frequent source of error. Any task not assigned to machines is "left over" for humans to carry out.

The problem with this theory, especially in cybersecurity, is that only very well-understood (relatively simple) processes can be automated. What remains for security teams is precisely the hard work that resists automation, and these difficult tasks require security professionals with experience and deep domain knowledge.

This is exacerbating the vicious cycle of security analyst burnout we currently face: 

  • Tasks that provide a sense of completion and satisfaction are automated.
  • Security analysts are increasingly assigned the tedious, arduous tasks that lead to burnout.
  • Analysts leave for greener pastures, leaving the security operations center shorthanded.
  • The company struggles to find talent to fill the gap.
  • When security management finally hires someone, the new employees are handed the same tedious, arduous tasks that lead to burnout.
  • Wash. Rinse. Repeat.

Lessons from the '90s and the IT Community
This isn't the first time this phenomenon has reared its head in the technology world. We saw a similar cycle in the IT/sysadmin world 25+ years ago. The sysadmin of the '90s was near omnipotent when it came to domain knowledge of technology and IT systems. This was driven by need — IT professionals had to be able to fix every problem across technology infrastructure, and that infrastructure was nowhere near as reliable and interoperable as it is today.

As technology improved and became more reliable, the need for all-knowing IT admins diminished, and with it the experience and accumulated knowledge that came from fixing systems and making them work together.

Today's IT professionals no longer implicitly acquire deep domain expertise on IT infrastructure in the same ways; however, the analogy also ends here for two significant reasons:

  1. While admins have always had to contend with users who break systems unintentionally, they are not faced with armies of users distributed around the world whose sole intention is to sabotage their systems. Simple, repetitive tasks can be automated; accurately discerning behavior and intent within environments that are difficult or impossible to model accurately in the first place is a fool's quest.
  2. Automation of IT infrastructure (DevOps) has led to many positive outcomes, such as requiring fewer people to manage more systems. This works for knowledge domains that evolve slowly and/or are hyper-focused on a specific component of a system. In security, however, the knowledge domain is not limited to "security practices" (a narrow field on its own). The security professional must understand how attackers abuse all of the legitimate technologies and architectures adopted in the enterprise, most of which evolve extremely rapidly.

Compensating for Automation
Where does this leave the security industry? Is it possible to find a balance? The offshoot of the Leftover Principle is called the Compensatory Principle. This theory says that there are tasks that humans do well that machines don't. People and machines should focus on what they do well, compensating for each other's shortcomings. 

Attempting to automate humans out of cybersecurity is detrimental to our industry and destined to fail, primarily because we're not facing a tech opponent — we're facing human adversaries who go to great lengths to find weaknesses to exploit. Because so much is automated now, security analysts simply aren't required to go to the same depths, which is creating an even wider and more detrimental gap between attackers and defenders.

What's an example of "leftover" work today? The work we now call hunting — the team's responsibility to compensate for the ineffectiveness of automated systems — is one. The inability of most teams to hunt has created a perception that work isn't getting done because there's no talent to do it. In reality, automation is making matters worse here, because effective hunting rests on fundamentals the analyst would have learned by completing the "simple" tasks that are now automated away.

What's the solution? How do we embrace machine learning and automation without making our situation worse?

Organizations need to focus automation on the tedious, error-prone tasks that drive security analyst burnout, while leaving the work that requires discernment to analysts.

For instance, automating parts of the alert investigation process can have a huge impact on security analyst productivity. Automating tasks such as tracking a device as it moves across the network, and identifying infected devices by their human owners and those owners' behaviors rather than by ephemeral identifiers like IP addresses (which require additional human work just to identify the owner), can be enormously helpful for analysts.
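As an illustrative sketch only (not any specific product's API), enrichment of this kind can be as simple as joining an alert's ephemeral IP address against DHCP lease history to recover a stable device and owner identity. The `Lease` record and the sample data below are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Lease:
    """A hypothetical DHCP lease record tying an IP to a device and owner."""
    ip: str
    start: datetime
    end: datetime
    mac: str
    owner: str

# Hypothetical lease history: the same IP is held by two different
# people on two different days -- the IP alone is ambiguous.
LEASES = [
    Lease("10.0.0.5", datetime(2017, 12, 1, 8), datetime(2017, 12, 1, 18),
          "aa:bb:cc:dd:ee:01", "alice"),
    Lease("10.0.0.5", datetime(2017, 12, 2, 9), datetime(2017, 12, 2, 17),
          "aa:bb:cc:dd:ee:02", "bob"),
]

def resolve_owner(ip: str, seen_at: datetime) -> Optional[str]:
    """Map an alert's (ip, timestamp) pair to whoever held the lease
    at that moment, so the analyst starts from an owner, not an IP."""
    for lease in LEASES:
        if lease.ip == ip and lease.start <= seen_at <= lease.end:
            return lease.owner
    return None
```

Automating this lookup removes a rote, error-prone pivot from every investigation while leaving the judgment call — whether the owner's behavior is actually malicious — to the analyst.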

Like many overhyped features of the past couple of decades, from anomaly detection (early 2000s) to analytics (late 2000s), automation is not a cure-all for today's cybersecurity woes. Worse, without a clear understanding of, and strategy for, how it will improve the work of your employees, automation may make some of your challenges worse, in a way that could be difficult to compensate for later.

Gary Golomb has nearly two decades of experience in threat analysis and has led investigations and containment efforts in a number of notable cases. With this experience — and a track record of researching and teaching state-of-the-art detection and response ...