Dark Reading is part of the Informa Tech Division of Informa PLC


Perimeter
4/5/2013 09:40 AM
Wendy Nather
Commentary

Is There Any Real Measurement In Monitoring?

Show me metrics that aren't marketing

There’s less useful measurement in monitoring than you think.

You might say, "What are you talking about? There’s plenty! There are events per second, transactions per second, [mega|giga|tera|peta|exa]bytes of data, millions of malware samples, millions of botnet victims, number of false positives …"

But that’s not all that goes into marketing these days. Tell me, just how big do you have to get before you get to call yourself "Big Data"? What’s the number, and can we tell everyone who doesn’t meet that number, "Thanks for playing"? Or is the race just about "Whatever number they have, ours is bigger" ad infinitum (and ad nauseam)? Maybe everyone should just claim to be storing LOTTABYTES and be done with it.

Almost as soon as "Big Data" came along, there was someone to explain that it wasn’t the size that mattered; it was how you used it. (Maybe they were feeling inadequate and were compensating.) One of the first "new" metrics was based around speed: either throughput, or how close to real-time the processing occurred. Vendors touted their "line speed" or their ability to do all their analysis in-memory (since writing to disk tends to slow down the pipe a lot).

But what does speed matter, if the coverage is spotty? We’ve known for a long time that stateful firewalls, IDS/IPS and web application firewalls magically get a lot faster if you turn enough high-level checks off. Or, if you must have everything turned on – if you’ve gotta catch ‘em all – offloading them to specialized processors can keep the main traffic flowing unimpeded. But then you could argue that that’s cheating, and it’s not as close to real-time anymore.

Vendors also tout the number of inputs that go into their offerings: how many other security technologies they integrate with (where "integrate" may just mean "we consume syslog, CSV and XML"). If you want to get fancier than just saying what data formats you accept, you can say you have an API, regardless of how many other tools actually use it. (When it comes to integration claims, I think API is the new XML, but someone may want to dispute that.)
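To make the point concrete, here is a hedged sketch of how shallow "we consume syslog" integration can be: a single regular expression loosely following the traditional BSD syslog layout (RFC 3164). The field names, pattern, and sample log line are illustrative, not any particular vendor's parser.

```python
import re

# A one-regex "syslog consumer" -- roughly the floor of what a vendor
# might mean by "we integrate with your other security tools."
# Pattern loosely follows RFC 3164: <PRI>TIMESTAMP HOST TAG: MESSAGE
SYSLOG_RE = re.compile(
    r"<(?P<pri>\d+)>(?P<timestamp>\w{3}\s+\d+\s[\d:]+)\s"
    r"(?P<host>\S+)\s(?P<tag>[\w\-/]+):\s(?P<msg>.*)"
)

def consume(line: str) -> dict:
    """Parse one syslog line into fields; empty dict if it doesn't match."""
    m = SYSLOG_RE.match(line)
    return m.groupdict() if m else {}

event = consume("<34>Oct 11 22:14:15 fw01 sshd: Failed password for root")
print(event["host"], event["tag"])  # fw01 sshd
```

A dozen lines of parsing is not nothing, but it is a long way from the bidirectional, API-level integration the marketing implies.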

Now that we’ve put size, speed and coverage to bed, someone’s going to bring up "intelligence." How do you rate the analytics offered today with most monitoring systems? Do you measure it by the number of patents held by each vendor for their algorithms? Is it whether their data scientists went to Stanford or MIT? How about the number of factors they use to calculate their risk and severity scores? What’s the IQ of a SIEM?

After the analytics skirmishes comes the other kind of "intelligence," namely the number and variety of additional inputs to the algorithms: reputation, geolocation, indicators of compromise, or possibly the number of former government intelligence analysts on the research team (and/or on the board of directors).

It’s extremely hard to measure and compare intelligence in this context, so some vendors resort to counting false positives. I’m dubious about how well that works, since a false positive can be in the eye of the beholder. If an alert has to travel through only two levels of analyst instead of three before it gets discounted, is it "less false"?

And then it’s back to numbers: the number of external intelligence feeds that are used to enrich the data that the monitoring system processes. (Still with me? Stay with the group; don’t get lost.) But are ten feeds necessarily better than one? Are a hundred feeds better than ten? How much more confidence are you getting, and after which number of feeds does the confidence level plateau?
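The plateau question can be sketched with a back-of-the-envelope model. Assume (generously, and purely for illustration) that each feed independently flags a given malicious indicator with some fixed probability; then the chance that at least one feed catches it rises quickly and then flattens. The 30% per-feed detection rate below is an assumption, not a measured figure.

```python
# Toy model of diminishing returns from stacking threat-intel feeds.
# Assumes feeds are independent with an equal per-feed detection
# probability -- both assumptions are optimistic, since real feeds
# overlap heavily.

def combined_confidence(per_feed_prob: float, num_feeds: int) -> float:
    """Probability that at least one of num_feeds feeds flags an indicator."""
    return 1.0 - (1.0 - per_feed_prob) ** num_feeds

for n in (1, 5, 10, 100):
    print(f"{n:>3} feeds -> {combined_confidence(0.30, n):.4f}")
```

Under even this rosy model, the tenth feed adds far less than the second, and the hundredth adds essentially nothing — and real feeds are correlated, so the plateau arrives sooner.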

Finally, the latest attempt at differentiation uses the word "actionable." Again, how do you measure that? The word connotes a binary condition: either you can do something with it, or you can’t. Can one system produce data that is "more actionable" than another one, and if so, how do you prove it?

I expect that the next salvo fired in the Monitoring Metrics Wars will be the originality or uniqueness of the data. Perhaps the freshness, too. Not only will the data be processed "live" (which is supposed to be better than "real-time," I understand – or maybe it’s the other way around), but it’ll be newer than anyone else’s data, still dewy from the data fields. It’ll be organic, locally sourced, internally generated, and home-made. Just like Mother used to analyze.

One thing’s for sure: buyers will still be wading through the marketing morass, trying to search out bits of dry land that will hold up to a purchasing decision. Not only will they have trouble differentiating vendors and their offerings; they’ll also struggle to find metrics that tell them when their monitoring is good enough. There are few comparisons out there that are both objective and complete. But I personally would pay good money to see an Actionability Bakeoff.

Wendy Nather is Research Director of the Enterprise Security Practice at the independent analyst firm 451 Research. You can find her on Twitter as @451wendy.

