12/13/2017 10:30 AM
Gary Golomb
Commentary

Automation Could Be Widening the Cybersecurity Skills Gap

Sticking workers with tedious jobs that AI can't do leads to burnout, but there is a way to achieve balance.

According to Cybersecurity Ventures, the cybersecurity skills shortage is now expected to hit 3.5 million positions by 2021 — a huge jump from current estimates of 1 million job openings.

To compensate for the growing shortage of talent, the cybersecurity industry is embracing artificial intelligence and automation. But can automation actually make the skills gap even greater? Unfortunately, yes. Security teams can still find a balance, though.

The Leftover Principle of Automation
The concept of mechanizing human tasks to drive efficiency has been studied since the advent of industrial automation. The primary goal is to automate as much as possible, removing human decision making from the process, because human decisions are often the most frequent source of error. Any task not assigned to machines is "left over" for humans to carry out.

The problem with this theory, especially in cybersecurity, is that only very well-understood (relatively simple) processes can be automated, so what is left over for security teams is precisely the hard work that can't be. These difficult tasks require security professionals with experience and deep domain knowledge.

This is exacerbating the vicious cycle of security analyst burnout we currently face: 

  • Tasks that provide a sense of completion/satisfaction are automated.
  • Security analysts are increasingly asked to work on tedious, arduous tasks that lead to burnout.
  • Analysts leave to find greener pastures, leaving the security operations center shorthanded.
  • The company struggles to find talent to fill the gap.
  • When security management finally hires someone, the new employees are handed the same tedious, arduous tasks that lead to burnout.
  • Wash. Rinse. Repeat.

Lessons from the '90s and the IT Community
This isn't the first time this phenomenon has reared its head in the technology world. We saw a similar cycle in the IT/sysadmin world 25+ years ago. The sysadmin of the '90s was nearly omniscient when it came to domain knowledge of technology and IT systems. This was driven by need: IT professionals had to be able to fix every problem across the technology infrastructure, and that infrastructure was nowhere near as reliable and interoperable as it is today.

As technology advanced and grew more reliable, the need for all-knowing IT admins diminished. That shift necessarily reduced the experience and accumulated knowledge that came from fixing systems and making sure they worked together.

Today's IT professionals no longer implicitly acquire deep domain expertise on IT infrastructure in the same ways; however, the analogy also ends here for two significant reasons:

  1. While admins have always had to contend with users who break systems unintentionally, they are not faced with armies of users distributed around the world whose sole intention is to sabotage those systems. Simple, repetitive tasks can be automated; accurately discerning behavior and intent within environments that are difficult or impossible to model accurately in the first place is a fool's quest.
  2. Automation of IT infrastructure (DevOps) has led to many positive outcomes, such as requiring fewer people to manage more systems. This works for knowledge domains that evolve slowly and/or are hyper-focused on a specific component of a system. In security, however, the knowledge domain is not dictated by "security practices" alone (which are quite limited); the security professional must also understand how attackers abuse all the legitimate technologies and architectures adopted in the enterprise, most of which evolve extremely rapidly.

Compensating for Automation
Where does this leave the security industry? Is it possible to find a balance? The offshoot of the Leftover Principle is called the Compensatory Principle. This theory says that there are tasks that humans do well that machines don't. People and machines should focus on what they do well, compensating for each other's shortcomings. 

Attempting to automate humans out of cybersecurity is detrimental to our industry and destined to fail, primarily because we're not facing a technological opponent; we're facing human adversaries who go to great lengths to find weaknesses to exploit. Because so much is automated now, security analysts simply aren't required to go to the same depths, which creates an even wider and more damaging gap between attackers and defenders.

What's an example of "leftover" work today? One is the work we now call hunting: the team's responsibility to compensate for the ineffectiveness of automated systems. The inability of most teams to hunt has created a perception that work isn't getting done because there's no talent to do it. In reality, automation makes matters worse in this context, because effective hunting depends on the analyst having learned the more fundamental techniques while completing the "simple" tasks.

What's the solution? How do we embrace machine learning and automation without making our situation worse?

Organizations need to focus automation on the tedious, error-prone tasks that drive security analyst burnout, while leaving the jobs that require more discernment to analysts.

For instance, automating parts of the alert investigation process can have a huge impact on security analyst productivity. Automating tasks such as tracking a device as it moves across the network, and identifying infected devices by their human owners and behaviors rather than by ephemeral identifiers like IP addresses (which require additional human work to tie back to an owner), can be enormously helpful and positive for analysts.
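As a rough illustration of that kind of enrichment, the sketch below resolves an alert's IP address, at the time the alert fired, to a device and its human owner. The lease table, inventory, and function names here are purely illustrative assumptions, not any vendor's API; in practice the data would come from DHCP logs and an asset inventory.

```python
# Illustrative sketch: enrich an alert with a stable device/owner identity
# instead of leaving the analyst to chase an ephemeral IP by hand.
# LEASES, OWNERS, and enrich_alert are assumed/hypothetical names.
from dataclasses import dataclass

@dataclass(frozen=True)
class Lease:
    ip: str      # address assigned for the lease window
    mac: str     # comparatively stable device identifier
    start: int   # lease start, epoch seconds
    end: int     # lease end, epoch seconds

# Example DHCP lease history: the same IP maps to two devices over time.
LEASES = [
    Lease("10.0.0.5", "aa:bb:cc:01", 1000, 2000),
    Lease("10.0.0.5", "aa:bb:cc:02", 2000, 3000),
]
# Example asset inventory mapping devices to their human owners.
OWNERS = {"aa:bb:cc:01": "alice", "aa:bb:cc:02": "bob"}

def enrich_alert(ip: str, timestamp: int) -> dict:
    """Resolve an alert's IP at the moment it fired to a device and owner."""
    for lease in LEASES:
        if lease.ip == ip and lease.start <= timestamp < lease.end:
            return {"ip": ip, "mac": lease.mac,
                    "owner": OWNERS.get(lease.mac, "unknown")}
    return {"ip": ip, "mac": None, "owner": "unknown"}

print(enrich_alert("10.0.0.5", 1500))  # same IP, earlier alert: alice's device
print(enrich_alert("10.0.0.5", 2500))  # same IP, later alert: bob's device
```

The point of the sketch is the time-bounded lookup: because IP addresses are reassigned, the same address resolves to different owners depending on when the alert fired, which is exactly the correlation work analysts otherwise do manually.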

Like many of the overhyped features we've seen over the past couple of decades, from anomaly detection (early 2000s) to analytics (late 2000s), automation is not a cure-all for our cybersecurity woes of today. And worse, without a clear understanding and strategy for how automation will improve the work of your employees, automation might make some of your challenges worse — in a way that could be difficult to compensate for later.  

Gary Golomb has nearly two decades of experience in threat analysis and has led investigations and containment efforts in a number of notable cases. With this experience — and a track record of researching and teaching state-of-the-art detection and response ...