Commentary
6/18/2012 05:07 PM
Wendy Nather

Logging Smarter, Not Just Harder

The problem is not just Big Data -- it's variable data. We attempt to find the answer in late-night commercials

In many circles, Big Data seems to mean "more data than our system can handle," in which case you might just have a lousy system. I've also seen it used to mean "data volumes that our product can handle and theirs can't." Whenever it's used in this way, it appears that size is the important factor here, so can we just call it "Moby Data" instead?

In any case, it presents a problem for security monitoring -- not just because of size, and not just because of variety, but because of variability. I was greatly interested in a blog post on Packet Pushers by the Socratically named Mrs. Y on thin-slicing security data. She talks about the unknown unknowns, but it's not just about detecting those. She also points out that when piped through a complex decision-making process -- such as security monitoring -- massive amounts of varied data can result in information overload:

Maybe the application of Thin-slicing techniques applied to the right data could make a difference, because I think it’s obvious we can’t continue in this current direction.

How do we determine the "right" data? In security, we have multiple techniques for identifying, reducing, exploring, and detecting. "Signature" has become such a dirty word that many who actually use signatures won’t admit to it. ("They’re not signatures! They’re rules!") But fundamentally, we use different kinds of signatures when trying to classify events for the purposes of detecting and deciding. We’re either looking for "anything that is X," where X is defined, known badness (i.e., a blacklist), or "anything that is not Y," where Y is defined, known goodness (i.e., a whitelist).
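Both modes reduce to a membership test; only the polarity differs. A minimal sketch (the indicator IPs here are made-up documentation addresses, not real threat data):

```python
# Two flavors of "signature": alert on known badness, or alert on
# anything outside known goodness. All indicator values are hypothetical.

BLACKLIST = {"198.51.100.7", "203.0.113.99"}   # defined, known badness
WHITELIST = {"10.0.0.5", "10.0.0.8"}           # defined, known goodness

def flag_blacklist(src_ip: str) -> bool:
    """Alert on 'anything that is X': match against known-bad indicators."""
    return src_ip in BLACKLIST

def flag_whitelist(src_ip: str) -> bool:
    """Alert on 'anything that is not Y': anything outside known-good."""
    return src_ip not in WHITELIST

print(flag_blacklist("198.51.100.7"))   # True
print(flag_whitelist("192.0.2.1"))      # True: unknown, so suspect
```

Note the asymmetry: a blacklist stays quiet about anything new, while a whitelist screams about anything new -- which is exactly why the choice of mode determines what kind of mistakes you make.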

If you want to be less judgmental, you move to anomaly detection, which was first proposed for intrusion detection by Dorothy Denning in the '80s: You collect a whole bunch of data, categorized by type, such as user activities, network traffic, configuration states, and so on. Then you create a profile based on statistical analysis of each data category. Don’t fool yourself, though: It’s still a signature.*
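The Denning-style approach can be sketched in a few lines: build a statistical profile per data category, then flag observations that fall outside it. The login counts and the three-sigma threshold below are illustrative assumptions, not recommendations:

```python
import statistics

def build_profile(history: list[float]) -> tuple[float, float]:
    """Profile = mean and standard deviation of past observations."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(value: float, profile: tuple[float, float], k: float = 3.0) -> bool:
    """Flag observations more than k standard deviations from the mean.
    The profile is effectively still a signature -- of 'normal'."""
    mean, stdev = profile
    return abs(value - mean) > k * stdev

logins_per_hour = [12, 15, 11, 14, 13, 12, 16, 14]   # hypothetical baseline
profile = build_profile(logins_per_hour)
print(is_anomalous(90, profile))   # True: far outside the profile
print(is_anomalous(13, profile))   # False: within normal range
```

The profile is just a description of past behavior, which is the point of the footnote below: anomaly detection doesn't escape signatures, it just inverts them.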

Even after you’ve decided that you’ve collected enough data to perform decent statistical analysis, and have a system for detecting outliers (anomalies), you’ll still need to investigate them so that you can label each one as either "new or additional goodness" (i.e., a false positive) or "badness" (better call out the troops). That’s the challenge with all these types of detection: they assume there is a pattern so static that you can define it and hand it to something automated to monitor.

And real life isn’t always like that. Real attackers aren’t like that, either. Our systems and users change, and adversaries adapt, and it’s very hard to compensate for one while still catching the other.

Another option would be to classify data further, as more static or more dynamic -- as patterns or statistics that are expected not to change very much over time, such as an assigned IP address, and those that are expected to drift (user interaction patterns with an application that gets new features). The latter you’ll need to assess and tweak more often, as time goes by and the "normal" state of the data changes; it also helps to have reasonable heuristics in place that can work within a certain range of variation. Binary security decisions are what lead to a plague of false positives.
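For the drifting kind of data, one way to build in that range of variation is a rolling baseline rather than a fixed rule. This is a sketch under assumptions of my own -- the 24-sample window and the 50% tolerance band are arbitrary illustrative heuristics:

```python
from collections import deque

class DriftingBaseline:
    """Track 'normal' as a rolling average so the baseline can drift
    with the data, and tolerate variation within a band around it."""

    def __init__(self, window: int = 24, tolerance: float = 0.5):
        self.history = deque(maxlen=window)  # only recent observations count
        self.tolerance = tolerance           # allowed fraction of deviation

    def check(self, value: float) -> bool:
        """Return True if value sits within the tolerated band around the
        current rolling average, then fold it into the baseline."""
        ok = True
        if self.history:
            avg = sum(self.history) / len(self.history)
            ok = abs(value - avg) <= self.tolerance * avg
        self.history.append(value)
        return ok

baseline = DriftingBaseline()
for v in [100, 105, 98, 110]:        # hypothetical request rates
    baseline.check(v)
print(baseline.check(400))           # False: well outside the band
print(baseline.check(103))           # True: within range
```

The tolerance band is the graded judgment the paragraph above calls for; a binary rule at a fixed value would fire every time the baseline legitimately drifted.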

Would we be better off with less data? I don’t know of anyone who wants to miss anything; security professionals tend to be data hoarders, and the events that looked innocuous last month suddenly become sinister when put together with new ones. Thin-slicing, or statistical sampling, may appear to make the volume problem more manageable, and it might work for static data profiles in a moby data store. But I think what we really need is tiered processing of security data, starting with the most static -- and therefore the most confident -- data decisions, and working with multiple analysis techniques until the most variable data floats to the top -- the kind that changes all the time, and always requires context and external information that a SIEM can’t have (it’s not a malicious DoS attack; your site got Huffposted).
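The tiered idea can be sketched as a chain of checks, cheapest and most confident first, where each tier either decides or escalates. The tiers, thresholds, and event fields below are hypothetical placeholders, not a real SIEM design:

```python
# Tiered processing: static decisions first, statistical next, and the
# most variable data last -- where only human context can decide.

def tier_static(event):
    """Tier 1: static indicators (e.g., an assigned known-bad IP)."""
    if event.get("src_ip") in {"203.0.113.99"}:   # hypothetical indicator
        return "block"
    return None   # no confident decision; escalate

def tier_statistical(event):
    """Tier 2: statistical profile (e.g., a request-rate outlier)."""
    if event.get("req_per_min", 0) > 1000:        # assumed threshold
        return "throttle"
    return None

def tier_contextual(event):
    """Tier 3: the most variable data, needing context a SIEM can't
    have (a traffic spike may just mean you got Huffposted)."""
    return "review"   # hand off to a human analyst

TIERS = [tier_static, tier_statistical, tier_contextual]

def decide(event):
    for tier in TIERS:
        verdict = tier(event)
        if verdict is not None:
            return verdict

print(decide({"src_ip": "203.0.113.99"}))                    # block
print(decide({"src_ip": "10.0.0.1", "req_per_min": 5000}))   # throttle
print(decide({"src_ip": "10.0.0.1"}))                        # review
```

Each tier only sees what the previous tiers couldn't settle, so the expensive, context-hungry analysis happens on the smallest possible slice of the data.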

It’s not thin-slicing; it’s multislicing. Or slicing and dicing. It’s the Ginsu knife model of security monitoring.

* An activity profile characterizes the behavior of a given subject (or set of subjects) with respect to a given object (or set thereof), thereby serving as a signature or description of normal activity for its respective subject(s) and object(s). -- Denning

Wendy Nather is Research Director of the Enterprise Security Practice at the independent analyst firm 451 Research. You can find her on Twitter as @451wendy.
