Dark Reading is part of the Informa Tech Division of Informa PLC



2/21/2019 10:30 AM

Security Analysts Are Only Human

SOC security analysts shoulder the largest cybersecurity burden. Automation is the way to circumvent the unavoidable human factor. Third in a six-part series.

We all make mistakes sometimes, which is why we need to factor in human error as part of the cybersecurity process. This series explores the human element of cybersecurity from six perspectives of fallibility: end users, security leaders, security analysts, IT security administrators, programmers, and attackers. So far, we have addressed end users and security leaders. This week, we cover security analysts.

Security analysts work in dedicated security operations centers (SOCs) as part of a team, which often works in shifts around the clock, to prevent, detect, assess, and respond to cybersecurity threats and incidents. Security analysts are sometimes responsible for fulfilling and assessing regulatory compliance pertaining to security as well. While there are a variety of managed security service providers who handle SOC activities as an outsourced function, organizations — especially enterprises — often develop their own in-house capabilities to handle some, if not all, of the SOC work.

Typically, these security analysts are cybersecurity professionals who are responsible for reviewing/triaging alerts and incident response. They can have expertise in network analysis, forensic analysis, malware analysis, and/or threat intelligence analysis. Their skill set is difficult to find; there is a well-publicized cybersecurity workforce shortage and currently 0% unemployment in the industry, according to Cybersecurity Ventures. Security analysts usually report to cybersecurity managers, who assimilate SOC information and insights for delivery to boards and C-level executives.

Common Mistakes
The average SOC receives 10,000 alerts each day from layers of monitoring and detection products. Some of the alerts are attacks from an ever-growing number of threat actors of varying sophistication, but a significant percentage (in many cases upward of 80%) are false positives. With such an overwhelming barrage of alerts, it is almost inevitable that an analyst will eventually miss or ignore an alert, or fail to identify a high-priority alert due to "alert fatigue" or incorrect prioritization. Resource-constrained security analysts who may lack time, understanding, a well-trained eye, or in some cases, motivation, often triage less than 10% of incoming alerts, prioritizing incidents that have out-of-the-box priority levels or are similar to what they have seen before. In addition, when an incident needs lengthy analysis, the security analyst may not be given the time to conduct a full analysis and consequently reports inaccurate or incomplete information about the attack.

Beyond triage and response mistakes, security analysts may make other errors such as incorrectly configuring security products. When an incident has been missed, or a configuration error has been made, security analysts may not be inclined to reveal the extent of the damage because of the potential for personal repercussions, compounding the problem.

Repercussions
When a security analyst fails to address or prioritize an alert, response can be significantly delayed or neglected entirely and a device or system can be compromised. This naturally could lead to a data breach, disruption of business, data exfiltration, and/or data destruction. Often the incidents are discovered and responded to much later than they would have been otherwise, amplifying the complexity and cost of containment and remediation as the security analysts identify the attack vector and extent of the attack. Moreover, deliberate or accidental misinformation from security analysts could put security leaders in a position where they deliver inaccurate reports, which in turn could be relayed externally with varying implications for important stakeholders.

Minimizing Mistakes
Given the sheer volume of alerts that security analysts see, we must concentrate on reducing the volume burden. This can be achieved by fine-tuning security solutions to reduce false positives, paring down any overlap in monitoring that creates redundancy, and automating as many analyst tasks as possible. Additionally, the number of alerts can be reduced when there is a strong prevention base. This starts with coordinating with the vulnerability management team to ensure that devices, operating systems, and applications are configured and patched properly. Beyond that, we need solutions that effectively triage and calculate priority values, incorporating threat intelligence and organization-specific data such as the criticality of affected systems. In addition, we have to accept that security analysts need time to thoroughly conduct analysis and that updates they provide as they progress may differ from their final reports.
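The kind of priority calculation described above can be sketched in a few lines. This is a minimal, hypothetical example, not any vendor's actual scoring model; the field names and weights are assumptions made for illustration:

```python
# Hedged sketch of alert prioritization that combines an alert's base
# severity with threat-intelligence context and the criticality of the
# affected system. Weights and fields are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Alert:
    base_severity: float      # 0..1, as reported by the detection product
    ioc_on_threat_feed: bool  # indicator matched external threat intel
    asset_criticality: float  # 0..1, from the organization's asset inventory

def priority(alert: Alert) -> float:
    """Weighted score in 0..1; higher means triage sooner."""
    score = 0.5 * alert.base_severity + 0.3 * alert.asset_criticality
    if alert.ioc_on_threat_feed:
        score += 0.2  # boost alerts corroborated by threat intelligence
    return min(score, 1.0)

alerts = [
    Alert(0.4, False, 0.2),  # low-value workstation, no intel match
    Alert(0.4, True, 0.9),   # same severity, but critical server + intel hit
]
for a in sorted(alerts, key=priority, reverse=True):
    print(f"{priority(a):.2f}  {a}")
```

Note that the two alerts have identical base severity; only the organization-specific context (asset criticality, threat-feed corroboration) separates them, which is exactly the signal an out-of-the-box priority level misses.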

Change the Paradigm
SOC security analysts are the resources on the front line; let us recognize that they shoulder the largest cybersecurity burden — in many cases addressing incident detection and response 24 hours a day, 365 days a year — and that many of the analyst positions need refactoring. The job of Tier 1 analysts who are triaging and reviewing alerts is unsustainable in its current form. The role needs to transition to a fully automated process, and a movement is already underway to do so. By automating manual "crank-turning" with new technologies, analysts have an opportunity to learn higher-tier skills and apply more critical thinking and advanced analysis to the true incidents that need in-depth investigations. But these higher-tier security analysts also need adequate training as well as the time and space to do their work effectively, without having to fear personal repercussions when they make mistakes, as all humans do.
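What automating the Tier 1 "crank-turning" might look like, in its simplest form, is a routing pass that auto-closes known-benign alerts, auto-escalates high-confidence detections, and leaves only the ambiguous middle for human review. The rules, thresholds, and signature names below are hypothetical assumptions, not a real product's logic:

```python
# Minimal sketch of automated Tier 1 triage: route each alert so that
# human analysts only see the ambiguous cases. All thresholds and
# signature names are illustrative assumptions.

KNOWN_BENIGN = {"scheduled-vuln-scan", "backup-agent-beacon"}

def route(alert: dict) -> str:
    if alert["signature"] in KNOWN_BENIGN:
        return "auto-close"          # documented recurring benign activity
    if alert["confidence"] >= 0.9 and alert["severity"] >= 0.8:
        return "auto-escalate"       # open an incident ticket automatically
    return "human-review"            # the smaller queue analysts actually see

alerts = [
    {"signature": "scheduled-vuln-scan", "confidence": 0.99, "severity": 0.1},
    {"signature": "c2-beacon", "confidence": 0.95, "severity": 0.9},
    {"signature": "odd-login-hours", "confidence": 0.5, "severity": 0.6},
]
print([route(a) for a in alerts])
# ['auto-close', 'auto-escalate', 'human-review']
```

Even a crude pass like this shrinks the queue that reaches a human, which is the precondition for analysts spending their time on critical thinking rather than crank-turning.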

In addition, we have to hold detection product vendors accountable for the false-positive rates of their standard configurations. While it may be in the vendor's best interest to err on the side of reporting an alert if there is any possibility of it being a true positive, that methodology does a disservice to the end users who end up inundated with useless noise that detracts from finding the signal.   

Join us next time to examine the fourth perspective in our series: IT security administrators.

Join Dark Reading LIVE for two cybersecurity summits at Interop 2019. Learn from the industry's most knowledgeable IT security experts. Check out the Interop agenda here.

Roselle Safran is President of Rosint Labs, a cybersecurity consultancy to security teams, leaders, and startups. She is also the Entrepreneur in Residence at Lytical Ventures, a venture capital firm that invests in cybersecurity startups. Previously, Roselle was CEO and ...
Comments
RyanSepe,
User Rank: Ninja
2/28/2019 | 9:35:01 PM
Re: Minimizing Mistakes by Maximizing Actionable Intelligence
@Joe, that's one of the inherent principles of my explanation. The fact that there is a shortage of security personnel further compounds the dilemma that, quantitatively, large numbers of incidents cannot be reviewed effectively by humans. It's the premise behind "Next Gen" software/services, utilizing AI and analysis of malicious processes over signature-based analysis.

Yes, there are deficiencies. But I believe it to be a better allocation of funding to try and create more proficient and consistent coding than trying to throw bodies at it retroactively for review. I understand that if there is a shortage in one security facet then it may persist into others. But coders and app dev individuals that could be helpful in this endeavor are not part of that shortage.

Respectfully, I understand your inquiry. But I'm a Security Engineer. Crafting solutions is part of my day to day, and this is again just one person's opinion of a plausible solution. Without attempting any solutions, we will all pontificate until this article is re-written in the years to come.
REISEN1955,
User Rank: Ninja
2/27/2019 | 2:10:43 PM
Re: Minimizing Mistakes by Maximizing Actionable Intelligence
Agree - some time off. After all, when Watson was tasked with diagnostics for cancer patients, the results would have killed some people. True. Don't think that is part of the medical oath and desired-results field. The cancer is, of course, eradicated along with the host.
Joe Stanganelli,
User Rank: Ninja
2/26/2019 | 10:33:26 PM
Re: Minimizing Mistakes by Maximizing Actionable Intelligence
@REISEN: It should, of course, be theoretically possible to get to the point in AI/ML when a "robot" could use tactile senses just as well as other "senses" in performing surgical functions. That said, I suspect we're a ways off.
REISEN1955,
User Rank: Ninja
2/26/2019 | 8:40:02 AM
Re: Minimizing Mistakes by Maximizing Actionable Intelligence
Ages ago I was discussing robot surgery with a dentist, and he pointed out that however magnificent the results may be --- a robotic arm or hand lacks the ability of the human hand to "feel" something and evaluate it by intuitive work rather than access of a database. True, and the same applies for cyber. Some human thought (not Vulcan logic) applies here. We "know" certain things that cannot be quantified as rote answers.
Joe Stanganelli,
User Rank: Ninja
2/25/2019 | 8:07:13 PM
Re: Minimizing Mistakes by Maximizing Actionable Intelligence
@Ryan: Of course, the thing to remember moving forward is that, if we accept the current narrative (which I don't, but that's a post for another day), there is a drastic shortage of cybersecurity talent. Consequently, assuming the correctness of that premise, where's the talent to make sure that the automation is working properly and is properly tailored for the customer organization?
Joe Stanganelli,
User Rank: Ninja
2/25/2019 | 8:05:12 PM
Re: Automation is KEY
@Dr.T: Moreover, sometimes these analysts will see malicious traffic and give a heads up to an affected organization -- who, sometimes, will expressly tell the tier-1 to not call them again (because they'd rather not know, because of the compliance triggers).

Perverse, but it happens.
Joe Stanganelli,
User Rank: Ninja
2/25/2019 | 8:01:44 PM
Automate the fatigue?
Indeed, I've recently interviewed consultants on this very topic who are espousing the same message -- and pundits in the press and thought leadership are also calling for AI/ML/automation solutions in place of humans for handling the day-to-day. The machines don't get fatigued at the same rate as the humans do.
Dr.T,
User Rank: Ninja
2/25/2019 | 11:22:43 AM
Re: Minimizing Mistakes by Maximizing Actionable Intelligence
As stated, receiving 10K alerts per day would be an impossible task to review without automated logic built into the coding of your SOC; I agree. Also, as stated, most are false positives. One option could be generating those alerts in more intelligent ways.
Dr.T,
User Rank: Ninja
2/25/2019 | 11:20:09 AM
Re: Minimizing Mistakes by Maximizing Actionable Intelligence
A human element will always be needed to one degree or another, but humans are prone to error. Agree. Being the weakest link in overall security, we are vulnerable too.
Dr.T,
User Rank: Ninja
2/25/2019 | 11:19:00 AM
Re: Automation is KEY
A SOC, coupled with the right internal and external intelligence plus orchestration, can effectively automate Tier 1. Agree. They can also use AI to offload some initial workload.