Perimeter | Commentary
6/18/2012 05:07 PM
Wendy Nather

Logging Smarter, Not Just Harder

The problem is not just Big Data -- it's variable data. We attempt to find the answer in late-night commercials

In many circles, Big Data seems to mean "more data than our system can handle," in which case you might just have a lousy system. I've also seen it used to mean "data volumes that our product can handle and theirs can't." Whenever it's used in this way, it appears that size is the important factor here, so can we just call it "Moby Data" instead?

In any case, it presents a problem for security monitoring -- not just because of size, and not just because of variety, but because of variability. I was greatly interested in a blog post on Packet Pushers by the Socratically named Mrs. Y on thin-slicing security data. She talks about the unknown unknowns, but it's not just about detecting those. She also points out that massive amounts of varied data, when piped through a complex decision-making process -- such as security monitoring -- can result in information overload:

Maybe the application of Thin-slicing techniques applied to the right data could make a difference, because I think it’s obvious we can’t continue in this current direction.

How do we determine the "right" data? In security, we have multiple techniques for identifying, reducing, exploring, and detecting. "Signature" has become such a dirty word that many who actually use signatures won’t admit to it. ("They’re not signatures! They’re rules!") But fundamentally, we use different kinds of signatures when trying to classify events for the purposes of detecting and deciding. We’re either looking for "anything that is X," where X is defined, known badness (i.e., a blacklist), or "anything that is not Y," where Y is defined, known goodness (i.e., a whitelist).
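As a minimal sketch (the event strings and set contents here are purely illustrative, not real detection content), the two signature styles reduce to opposite predicates over an event:

```python
# Illustrative only: two toy "signature" sets for a hypothetical monitor.
BLACKLIST = {"cmd.exe /c whoami", "nc -e /bin/sh"}   # defined, known badness
WHITELIST = {"GET /index.html", "GET /healthz"}      # defined, known goodness

def flag_by_blacklist(event: str) -> bool:
    """'Anything that is X': alert only on defined, known badness."""
    return event in BLACKLIST

def flag_by_whitelist(event: str) -> bool:
    """'Anything that is not Y': alert on everything outside known goodness."""
    return event not in WHITELIST
```

Note the asymmetry: the blacklist stays quiet about anything it has never seen, while the whitelist screams about it. That difference is exactly where the false-negative/false-positive trade-off lives.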

If you want to be less judgmental, you move to anomaly detection, which was first proposed for intrusion detection by Dorothy Denning in the '80s: You collect a whole bunch of data, categorized by type, such as user activities, network traffic, configuration states, and so on. Then you create a profile based on statistical analysis of each data category. Don’t fool yourself, though: It’s still a signature.*
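A hedged sketch of that Denning-style profile, using an invented login-rate metric and an arbitrary three-sigma threshold, shows why it's still a signature: "anything more than N standard deviations from normal" is just another pattern to match.

```python
import statistics

def build_profile(observations):
    """Profile = mean and standard deviation of past observations."""
    return statistics.mean(observations), statistics.stdev(observations)

def is_anomalous(value, profile, sigmas=3.0):
    """The profile is still a signature: flag anything more than
    `sigmas` standard deviations away from the learned 'normal'."""
    mean, stdev = profile
    return abs(value - mean) > sigmas * stdev

# Hypothetical data: a user's logins per hour over a baseline period.
logins_per_hour = [4, 5, 6, 5, 4, 6, 5, 5]
profile = build_profile(logins_per_hour)
print(is_anomalous(40, profile))  # prints True: a burst of 40 is an outlier
```

The choice of three sigmas is an assumption; pick it too tight and every busy afternoon becomes an incident, too loose and a slow attacker fits comfortably inside "normal."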

Even after you’ve decided that you’ve collected enough data to perform decent statistical analysis, and have a system for detecting outliers (anomalies), you’ll still need to investigate them so that you can label them either as "new or additional goodness" (i.e., false positives) or as "badness" (better call out the troops). That’s the challenge with all these types of detection: They assume there is a pattern so static that you can define it and hand it to something automated to monitor.

And real life isn’t always like that. Real attackers aren’t like that, either. Our systems and users change, and adversaries adapt, and it’s very hard to compensate for one while still catching the other.

Another option would be to classify data further as more static or more dynamic: patterns or statistics that are expected not to change much over time, such as an assigned IP address, versus those that are expected to drift, such as user interaction patterns with an application that keeps getting new features. The latter you’ll need to assess and tweak more often as time goes by and the "normal" state of the data changes; it also helps to have reasonable heuristics in place that can work within a certain range of variation. Binary security decisions are what lead to a plague of false positives.
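One illustrative way to express that split (the IP check, metric, smoothing factor, and tolerance band here are all invented assumptions, not any particular product's behavior): a static fact gets a binary check, while a drifting metric gets a baseline that is allowed to move.

```python
# Static data: an assigned IP address should simply never change.
EXPECTED_IP = "10.0.0.7"

def check_static(observed_ip: str) -> bool:
    """A binary decision is fine for data that isn't supposed to move."""
    return observed_ip == EXPECTED_IP

class DriftingBaseline:
    """Track a metric expected to drift; alert only outside a band,
    and fold each observation back into the baseline (an EWMA)."""
    def __init__(self, start, alpha=0.1, tolerance=0.5):
        self.baseline = start
        self.alpha = alpha          # how quickly "normal" is allowed to move
        self.tolerance = tolerance  # fractional band around the baseline

    def update_and_check(self, value) -> bool:
        low = self.baseline * (1 - self.tolerance)
        high = self.baseline * (1 + self.tolerance)
        anomalous = not (low <= value <= high)
        # update the baseline either way, so "normal" tracks the drift
        self.baseline = (1 - self.alpha) * self.baseline + self.alpha * value
        return anomalous
```

The tolerance band is the "reasonable heuristic" above: inside it, nothing fires, which is precisely how you avoid the binary-decision plague of false positives on data that was always going to wander.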

Would we be better off with less data? I don’t know of anyone who wants to miss anything; security professionals tend to be data hoarders, and the events that looked innocuous last month suddenly become sinister when put together with new ones. Thin-slicing, or statistical sampling, may appear to make the volume problem more manageable, and it might work for static data profiles in a moby data store. But I think what we really need is tiered processing of security data, starting with the most static -- and therefore the most confident -- data decisions, and working with multiple analysis techniques until the most variable data floats to the top -- the kind that changes all the time, and always requires context and external information that a SIEM can’t have (it’s not a malicious DoS attack; your site got Huffposted).
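The tiered idea might be sketched like this (tier names, checks, and thresholds are invented for illustration): analysis runs from the most static and confident checks downward, and whatever no tier can decide floats up as "needs context" -- the residue that a SIEM alone can't label.

```python
# Each tier returns a verdict string, or None to fall through.

def blacklist_tier(event):
    """Most static tier: defined, known badness. Highest confidence."""
    if event.get("payload") in {"known-exploit"}:
        return "malicious"
    return None

def profile_tier(event):
    """More variable tier: a statistical threshold (arbitrary here)."""
    if event.get("requests_per_min", 0) > 1000:
        return "suspicious"
    return None

TIERS = [blacklist_tier, profile_tier]   # ordered most static first

def classify(event):
    for tier in TIERS:
        verdict = tier(event)
        if verdict is not None:
            return verdict
    return "needs-context"   # the variable residue; hand it to a human

print(classify({"payload": "known-exploit"}))   # prints malicious
print(classify({"requests_per_min": 5000}))     # prints suspicious
print(classify({"payload": "hello"}))           # prints needs-context
```

The ordering is the point: cheap, confident slices go first, so the expensive, context-hungry analysis only ever sees what the earlier tiers couldn't settle.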

It’s not thin-slicing; it’s multislicing. Or slicing and dicing. It’s the Ginsu knife model of security monitoring.

* An activity profile characterizes the behavior of a given subject (or set of subjects) with respect to a given object (or set thereof), thereby serving as a signature or description of normal activity for its respective subject(s) and object(s). -- Denning

Wendy Nather is Research Director of the Enterprise Security Practice at the independent analyst firm 451 Research. You can find her on Twitter as @451wendy.
