Perimeter
6/18/2012
05:07 PM
Wendy Nather
Commentary

Logging Smarter, Not Just Harder

The problem is not just Big Data -- it's variable data. We attempt to find the answer in late-night commercials.

In many circles, Big Data seems to mean "more data than our system can handle," in which case you might just have a lousy system. I've also seen it used to mean "data volumes that our product can handle and theirs can't." Whenever it's used this way, size appears to be the important factor, so can we just call it "Moby Data" instead?

In any case, it presents a problem for security monitoring -- not just because of size, and not just because of variety, but because of variability. I was greatly interested in a blog post on Packet Pushers by the Socratically named Mrs. Y, on thin-slicing security data. She talks about the unknown unknowns, but it's not just about detecting those. She also points out that when piped through a complex decision-making process -- such as with security monitoring -- massive amounts of varied data can result in information overload:

Maybe the application of Thin-slicing techniques applied to the right data could make a difference, because I think it’s obvious we can’t continue in this current direction.

How do we determine the "right" data? In security, we have multiple techniques for identifying, reducing, exploring, and detecting. "Signature" has become such a dirty word that many who actually use it won't admit to it. ("They're not signatures! They're rules!") But fundamentally speaking, we use different kinds of signatures when trying to classify events for the purposes of detecting and deciding. We're looking either for "anything that is X," meaning defined, known badness (i.e., a blacklist), or for "anything that is not Y," meaning defined, known goodness (i.e., a whitelist).
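
To make the distinction concrete, here's a minimal sketch in Python of the two kinds of signature: a blacklist check that flags anything that is X, and a whitelist check that flags anything that is not Y. The event fields and the rule sets are hypothetical, purely for illustration.

    # Minimal sketch of blacklist vs. whitelist matching.
    # The event fields and rule sets are hypothetical illustrations,
    # not any particular product's rule format.

    BLACKLIST = {"198.51.100.23", "203.0.113.7"}      # known-bad sources ("anything that is X")
    WHITELIST = {"payroll.internal", "crm.internal"}  # known-good destinations ("anything that is not Y")

    def blacklist_hit(event: dict) -> bool:
        """Flag the event if it matches defined, known badness."""
        return event.get("src_ip") in BLACKLIST

    def whitelist_miss(event: dict) -> bool:
        """Flag the event if it falls outside defined, known goodness."""
        return event.get("dest_host") not in WHITELIST

    event = {"src_ip": "198.51.100.23", "dest_host": "crm.internal"}
    print(blacklist_hit(event), whitelist_miss(event))  # True False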

If you want to be less judgmental, you move to anomaly detection, which was first proposed for intrusion detection by Dorothy Denning in the '80s: You collect a whole bunch of data, categorized by type, such as user activities, network traffic, configuration states, and so on. Then you create a profile based on statistical analysis of each data category. Don’t fool yourself, though: It’s still a signature.*
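
As a rough illustration of why it's still a signature, here's a hedged sketch that baselines a single made-up metric (say, logins per hour for one user) with a mean and standard deviation, then flags anything more than three standard deviations out. The metric and the 3-sigma threshold are my assumptions for illustration, not Denning's actual model.

    import statistics

    # Sketch of an activity profile for a single metric (e.g., logins per hour
    # for one user). The metric, the window, and the 3-sigma threshold are
    # illustrative assumptions, not a real product's defaults.

    baseline = [4, 5, 3, 6, 4, 5, 7, 4, 5, 6]   # historical observations
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)

    def is_anomalous(observation: float, sigmas: float = 3.0) -> bool:
        """An observation outside mean +/- sigmas * stdev is an 'anomaly' --
        which is really just a statistically derived signature of normal."""
        return abs(observation - mean) > sigmas * stdev

    print(is_anomalous(5))    # False: within the profile
    print(is_anomalous(40))   # True: an outlier that still needs a human to label it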

Even after you've decided that you've collected enough data to perform decent statistical analysis, and have a system for detecting outliers (anomalies), you'll still need to investigate them so that you can label them as either "new or additional goodness" (i.e., false positives) or "badness" (better call out the troops). That's the challenge with all these types of detection: they assume there is a pattern so static that you can define it and hand it to something automated to monitor.

And real life isn’t always like that. Real attackers aren’t like that, either. Our systems and users change, and adversaries adapt, and it’s very hard to compensate for one while still catching the other.

Another option would be to classify data further as more static or more dynamic -- patterns or statistics that are expected not to change much over time (such as an assigned IP address) versus those that are expected to drift (user interaction patterns with an application that keeps getting new features). The latter you'll need to reassess and tweak more often as the "normal" state of the data changes; it also helps to have reasonable heuristics in place that can work within a certain range of variation, because binary security decisions are what lead to a plague of false positives.
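
One way to picture that split, as a sketch only: static attributes get exact-match checks, while drifting metrics get checked against a tolerance band that you re-baseline over time. The field names and the 20 percent band below are hypothetical.

    # Sketch: static attributes get exact-match checks; drifting metrics get a
    # tolerance band instead of a binary yes/no. Field names and the 20% band
    # are hypothetical illustrations.

    STATIC_EXPECTED = {"assigned_ip": "10.20.30.40"}       # expected not to change
    DYNAMIC_BASELINE = {"requests_per_minute": 120.0}      # expected to drift

    def check_static(observed: dict) -> list:
        """Flag any static attribute that differs from its expected value."""
        return [k for k, v in STATIC_EXPECTED.items() if observed.get(k) != v]

    def check_dynamic(observed: dict, tolerance: float = 0.20) -> list:
        """Flag a metric only when it drifts outside +/- tolerance of its baseline."""
        flags = []
        for k, baseline in DYNAMIC_BASELINE.items():
            value = observed.get(k, 0.0)
            if abs(value - baseline) > tolerance * baseline:
                flags.append(k)
        return flags

    observed = {"assigned_ip": "10.20.30.40", "requests_per_minute": 135.0}
    print(check_static(observed), check_dynamic(observed))  # [] [] -- within range, no alert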

Would we be better off with less data? I don’t know of anyone who wants to miss anything; security professionals tend to be data hoarders, and the events that looked innocuous last month suddenly become sinister when put together with new ones. Thin-slicing, or statistical sampling, may appear to make the volume problem more manageable, and it might work for static data profiles in a moby data store. But I think what we really need is tiered processing of security data, starting with the most static -- and therefore the most confident -- data decisions, and working with multiple analysis techniques until the most variable data floats to the top -- the kind that changes all the time, and always requires context and external information that a SIEM can’t have (it’s not a malicious DoS attack; your site got Huffposted).
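
What tiered processing might look like in code, sketched under my own assumptions about the tiers and their order (nothing here is a prescribed architecture): run the cheapest, most confident checks first, and pass only what survives up to the analyses that need more context -- and, eventually, to a human.

    # Sketch of tiered processing: the cheapest, highest-confidence checks run first;
    # only events that survive each tier move on to analyses that need more context.
    # The tier functions are placeholders standing in for the checks sketched above.

    def tier_static_rules(events):
        """Tier 1: blacklist/whitelist-style decisions on the most static data."""
        return [e for e in events if not e.get("known_bad")]

    def tier_statistical(events):
        """Tier 2: keep only events whose anomaly score exceeds a baseline threshold."""
        return [e for e in events if e.get("score", 0) > 2.0]

    def tier_contextual(events):
        """Tier 3: whatever is left needs context a SIEM can't have -- queue for a human."""
        return events

    surviving = tier_static_rules([
        {"id": 1, "known_bad": True},
        {"id": 2, "known_bad": False, "score": 3.1},
        {"id": 3, "known_bad": False, "score": 0.4},
    ])
    for_review = tier_contextual(tier_statistical(surviving))
    print(for_review)   # [{'id': 2, ...}] -- the variable stuff that floats to the top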

It’s not thin-slicing; it’s multislicing. Or slicing and dicing. It’s the Ginsu knife model of security monitoring.

* An activity profile characterizes the behavior of a given subject (or set of subjects) with respect to a given object (or set thereof), thereby serving as a signature or description of normal activity for its respective subject(s) and object(s). -- Denning

Wendy Nather is Research Director of the Enterprise Security Practice at the independent analyst firm 451 Research. You can find her on Twitter as @451wendy.
