
Why Are We So Slow To Detect Data Breaches?

Poor instrumentation of network sensors, bad SIEM tuning, and a lack of communication among security team members allow breaches more time to fester

Security breach response times can be the crucial factor separating a minor security incident from a major data breach with far-reaching business effects. Yet most organizations today are slow to detect breaches. What's worse, many underestimate how long it really takes them to sniff out an attacker on their networks. That slowness, and the lack of awareness of it, plays right into the hands of attackers who craft long-term attacks designed to stay hidden on network resources for extended periods.

"The longer it takes to respond, the more firmly rooted the attacker will become, and more difficult and costly it will be to find and remove all of their implants," says James Phillippe, leader of threat and vulnerability services for the U.S. at Ernst & Young. "More importantly, the longer it takes, the more likely an attacker is to find and exfiltrate the organization's 'secret sauce.'"

Fighting A Perception Problem
The difficulty in achieving timely detection is that many line-of-business and even IT leaders think their organizations are already doing a good job. This week, a McAfee survey of 500 senior IT decision makers reported an average of just 10 hours to detect a breach. But other breach statistics and anecdotal evidence point to the contrary.

According to the Verizon Data Breach Investigations Report, 66 percent of breaches took months or even years to discover. And a recent Ponemon Institute report sponsored by Solera Networks, a Blue Coat company, found that, on average, it takes companies three months to discover a malicious breach and more than four months to resolve it.

"This misplaced confidence in their response demonstrates the disconnect that can happen between business leaders and the security team," says Gretchen Hellman, director of product marketing for SIEM at McAfee.

The statistics suggesting long breach detection times are backed up by plenty more anecdotal stories from security professionals in the trenches.

"Many organizations simply don't have enough tech and security staff to notice these breaches when they occur. I worked with a university that didn't notice they had a data breach for almost six months, until a governmental organization notified them that an attack had likely been carried out against their servers," says Jonathan Weber, founder of Marathon Studios, who believes that most data breaches are so slow to be discovered because attacks today rarely offer external disruption to essential services. "Once the initial leak event has passed, there are little or no indicators of the breach until the hackers return or the data surfaces elsewhere."

Because organizations don't know what they don't know, the perception problem lingers. The disconnect stems from the very same fundamental visibility shortcomings that are allowing attackers to extend their stays on enterprise networks in the first place.

"At this point, organizations need to increase their visibility into what's happening in their enterprises and focus on eliminating those cybersecurity blind spots," says Jason Mical, vice president of cybersecurity for AccessData.

One security leader, Mike Parrella, director of operations for managed services at Verdasys, was more blunt about why he believes organizations have not worked to improve visibility on their networks.

"The main reason is because businesses and government alike are filled with idiots and ostriches," he says. "People are simply not looking for a leak -- they would rather not look, not be bothered, not spend to solve the problem, and so they are not finding. They prefer to outrun their risk."

Instrumenting Sensors For Detection
To be fair, attackers have invested incredible amounts of money and time into devising methods of breaking in and stealing data under the proverbial radar. But experts say there are ways to adjust the monitoring and intelligence paradigms at enterprises to account for that.

Rather than thinking of defending the enterprise like a bank vault with a big door, says Dr. Mike Lloyd, CTO of RedSeal Networks, more organizations should think of it as similar to the way you'd secure a city.

"It's big, it's sprawling, it changes all the time without advance notice. You have to think about maps, about sensors, and you have to know where the pinch points are -- where you have 'threats' like ammonia trains running on lines that happen to cross the same cheap land you built your football stadium on," he says. "That's why the industry is talking so much about Big Data -- the hope [currently unfulfilled] is that if we can pile together all the overwhelming separate piles of sensor data, making an even bigger, even more overwhelming mountain, that we'll be able to make sense out of it and pick up the patterns of attack."

Lloyd believes that most network monitoring sensor infrastructure today is poorly instrumented, and he's not alone.

"More often than not, mistakes are made in the poor placement of monitoring technology," says Peter Tran, senior director of RSA Advanced Cyber Defense Practice. "From a strategic design perspective, enterprises need to approach detection in terms of behaviors indicative of exploitation rather than static rules triggered on known bad indicators of compromise. This is a shift to intelligence-driven security and a break from 'network-centric' security to 'data in motion' based on behavioral-driven analytics."

Starting out, Lloyd recommends organizations take three steps. First, they should map the infrastructure to get the lay of the land and figure out where to put sensors. Next, they should identify obvious weak points. Finally, they should design zones into the infrastructure so that monitoring can be done more easily at zone boundaries.
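Lloyd's third step can be sketched in miniature: given a map of network links and a zone assignment per host, the links that cross zone boundaries fall out automatically as candidate sensor locations. This is a hypothetical illustration, not a real tool; the hostnames, zones, and topology are all invented.

```python
# Hypothetical sketch: model the network as a graph, label hosts by zone,
# and list the links that cross zone boundaries -- the natural pinch
# points for monitoring sensors. All names below are invented.

# Adjacency list of network links (host -> connected hosts)
links = {
    "web01": ["app01", "lb01"],
    "app01": ["db01", "web01"],
    "db01": ["app01"],
    "lb01": ["web01"],
}

# Zone assignment per host (the "map" of the infrastructure)
zones = {
    "web01": "dmz",
    "lb01": "dmz",
    "app01": "internal",
    "db01": "restricted",
}

def zone_boundaries(links, zones):
    """Return the links whose endpoints sit in different zones."""
    crossings = set()
    for host, peers in links.items():
        for peer in peers:
            if zones[host] != zones[peer]:
                # Normalize direction so each link appears only once
                crossings.add(tuple(sorted((host, peer))))
    return sorted(crossings)

print(zone_boundaries(links, zones))
```

Each crossing the function returns is a boundary where a sensor sees all traffic moving between zones, which is far cheaper than instrumenting every host.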

[How have attackers managed to 'break' AV with a glut of malware? See 10 Ways Attackers Automate Malware Production.]

Data Analysis Is Key
As important as it is to determine where sensors are put, it is equally important to adjust what exactly they're looking for, says Wade Williamson, senior security analyst for Palo Alto Networks. As he puts it, most security products aren't necessarily designed to detect breaches, per se.

"Fundamentally, if you look at most networks, security is overwhelmingly focused on detecting and blocking a malicious payload. This is a pretty reasonable approach, but it's also incomplete," he says. "Breaches rely on a host of tools to investigate, gather data, and communicate to the remote attacker, and detecting these tools becomes just as important as stopping malicious payloads."

He recommends that organizations be on the lookout for custom tunnels, unauthorized proxies, RDP, and file transfer applications.
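A minimal sketch of that lookout, under stated assumptions: the connection-record fields, port list, and duration threshold below are all illustrative inventions, not vetted indicators, but they show the shape of flagging attacker tooling rather than payloads.

```python
# Hypothetical sketch: flag connection records matching the tooling
# Williamson describes -- RDP or file-transfer protocols leaving the
# network, and very long-lived sessions that may be custom tunnels.
# Ports and thresholds here are illustrative assumptions.

SUSPECT_PORTS = {3389: "rdp", 21: "ftp", 1080: "socks-proxy"}
TUNNEL_SECONDS = 6 * 3600  # flag sessions longer than six hours

def flag_connection(conn):
    """Return the reasons a connection record looks suspicious."""
    reasons = []
    if conn["dst_port"] in SUSPECT_PORTS and not conn["dst_internal"]:
        reasons.append(SUSPECT_PORTS[conn["dst_port"]] + "-to-external")
    if conn["duration"] > TUNNEL_SECONDS:
        reasons.append("long-lived-session")
    return reasons

conn = {"dst_port": 3389, "dst_internal": False, "duration": 30000}
print(flag_connection(conn))  # ['rdp-to-external', 'long-lived-session']
```

The point is not these particular rules but the orientation: none of them inspects a payload, yet each would surface a tool an attacker needs during the quiet phase of a breach.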

But appropriately setting up sensors is the easy first step. From there, organizations have to figure out how to put to good use all of the data those sensors are spewing out. It's this data that holds the key to driving down the time to detect breaches.

"One of the most powerful drivers is data, both unstructured and structured, particularly cross-correlated, analyzed from external sources relative to historical data and trending," Tran says. "This strategically gives an enterprise the ability to trend, score, and predict how likely they are to be targeted."
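The trending Tran describes can be reduced to a toy example: score today's activity against a historical baseline so that unusual volume stands out. A real system would fold in external threat feeds and far richer statistics; the numbers below are invented and the math is deliberately minimal.

```python
# Hypothetical sketch: score today's event volume against historical
# data so anomalies surface. The figures are invented for illustration.
import statistics

def anomaly_score(history, today):
    """Standard deviations between today's count and the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    return (today - mean) / stdev

daily_auth_failures = [12, 9, 14, 11, 10, 13, 12]  # past week's counts
score = anomaly_score(daily_auth_failures, today=45)
print(round(score, 1))  # far above baseline -> worth an analyst's attention
```

Even this crude baseline turns raw sensor output into something an analyst can rank, which is the step most drowning-in-data security teams are missing.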

According to Phillippe, the problem is that even at organizations that do have security information and event management (SIEM) tools in place, most have not tuned them well.

"A well-tuned SIEM is the heart of a security operations center and enables alerting to be accurate and complete," he says. "That way when an analyst gets an alert, they know all of the necessary context to respond quickly and comprehensively."

As essential as tools are to reducing the time it takes to detect a breach, even more critical is how well the people who run those tools put them to good use.

"Getting a list of 'convicted' systems that are doing remote callbacks indicating a compromise by a botnet is one thing; getting the boots on the ground to find the machine, capture it, analyze the breach, and reimage can be a daunting experience for a large enterprise," says Ray Zadjmool, principal consultant for Tevora Business Solutions. "Where is it? Who owns it? Who to call? All of this translates to slow detection and decreased response time."
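Zadjmool's questions, "Where is it? Who owns it?", are exactly what alert enrichment answers before an analyst ever sees the alert. A hypothetical sketch, with an invented asset inventory and alert format:

```python
# Hypothetical sketch: enrich a raw alert with asset context (owner,
# location, criticality) so responders need not hunt for it manually.
# The inventory entries and alert fields are invented for illustration.

asset_inventory = {
    "10.1.4.22": {"owner": "finance-it", "location": "hq-floor2",
                  "criticality": "high"},
}

def enrich_alert(alert, inventory):
    """Attach asset context to an alert; mark unknown assets explicitly."""
    context = inventory.get(alert["src_ip"], {"criticality": "unknown"})
    return {**alert, **context}

raw = {"rule": "botnet-callback", "src_ip": "10.1.4.22"}
enriched = enrich_alert(raw, asset_inventory)
print(enriched["owner"], enriched["criticality"])
```

The design choice worth noting is the explicit "unknown" criticality: an alert on a machine the inventory cannot place is itself a finding, since unowned machines are where responses stall.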

One particular difficulty that organizations face is in streamlining the collaboration between various security and operations team members. Even with all of the right data residing within the organization as an aggregate, it is very easy to fail to put all of the puzzle pieces together due to a lack of coordination.

"Right now, most organizations still have disparate teams, each using several disparate tools," Mical says. "They have to correlate all the critical data manually. It causes dangerous delays in validating suspected threats or responding to known threats."

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Comments
7/9/2013 | 10:26:44 AM
re: Why Are We So Slow To Detect Data Breaches?
And remember real-time attack detection and adaptive response within applications themselves, where there is knowledge of the user context.
6/24/2013 | 8:51:55 PM
re: Why Are We So Slow To Detect Data Breaches?
Ericka - The 2013 Verizon Data Breach Report you point to also highlights how innocent a data breach can appear. Three out of four intrusions exploit weak or stolen (but otherwise legitimate) credentials, and another 13 percent result from misuse of information by privileged users, according to the report. Organizations need new ways to detect misuse of information systems. While an account may be legitimate, the action taken with it often is not, and spotting that mismatch can provide early detection of a potential breach.

This is why we see the security industry focusing on the need for real-time security intelligence and big data, particularly related to identities, their access, and behavior data that reveals risky patterns. By having a way to analyze the risk associated with user access on a continuous basis, organizations will be able to better protect themselves against internal and external threats.

James McCabe
James McCabe
6/24/2013 | 2:29:58 PM
re: Why Are We So Slow To Detect Data Breaches?
Ericka makes some well-thought-out points regarding detection of data breaches. But she misses a major point: detection is a reactionary response! I think it's great that we can do all this tuning to our sensors and SIEMs, but we're still not PROTECTING the data! You must put protection controls closer to the target, the DATA itself. It must have strong usage policies and encryption. It must restrict who the data is decrypted for. Data should never be decrypted for administrators/superusers/root-level users. They need to administer the environment in which the data runs. They are not paid to "look" at data. Having these types of controls in place will go a long way toward reducing the attack surface! Let's get our heads out of the sand and start thinking about controls closer to the target. We need a paradigm shift in our security thinking.