
Perimeter
Commentary
4/5/2013 09:40 AM
Wendy Nather

Is There Any Real Measurement In Monitoring?

Show me metrics that aren't marketing

There’s less useful measurement in monitoring than you think.

You might say, "What are you talking about? There’s plenty! There are events per second, transactions per second, [mega|giga|tera|peta|exa]bytes of data, millions of malware samples, millions of botnet victims, number of false positives …"

But that’s not all that goes into marketing these days. Tell me, just how big do you have to get before you get to call yourself "Big Data"? What’s the number, and can we tell everyone who doesn’t meet that number, "Thanks for playing"? Or is the race just about "Whatever number they have, ours is bigger" ad infinitum (and ad nauseam)? Maybe everyone should just claim to be storing LOTTABYTES and be done with it.

Almost as soon as "Big Data" came along, there was someone to explain that it wasn’t the size that mattered; it was how you used it. (Maybe they were feeling inadequate and were compensating.) One of the first "new" metrics was based around speed: either throughput, or how close to real-time the processing occurred. Vendors touted their "line speed" or their ability to do all their analysis in-memory (since writing to disk tends to slow down the pipe a lot).

But what does speed matter, if the coverage is spotty? We’ve known for a long time that stateful firewalls, IDS/IPS and web application firewalls magically get a lot faster if you turn enough high-level checks off. Or, if you must have everything turned on – if you’ve gotta catch ‘em all – offloading them to specialized processors can keep the main traffic flowing unimpeded. But then you could argue that that’s cheating, and it’s not as close to real-time anymore.

Vendors also tout the number of inputs that go into their offerings: how many other security technologies they integrate with (where "integrate" may just mean "we consume syslog, CSV and XML"). If you want to get fancier than just saying what data formats you accept, you can say you have an API, regardless of how many other tools actually use it. (When it comes to integration claims, I think API is the new XML, but someone may want to dispute that.)

Now that we’ve put size, speed and coverage to bed, someone’s going to bring up "intelligence." How do you rate the analytics offered today with most monitoring systems? Do you measure it by the number of patents held by each vendor for their algorithms? Is it whether their data scientists went to Stanford or MIT? How about the number of factors they use to calculate their risk and severity scores? What’s the IQ of a SIEM?

After the analytics skirmishes comes the other kind of "intelligence": the number and variety of additional inputs to the algorithms: reputation, geolocation, indicators of compromise, or possibly the number of former government intelligence analysts on the research team (and/or on the board of directors).

It’s extremely hard to measure and compare intelligence in this context, so some vendors resort to counting false positives. I’m dubious about how well that works, since a false positive can be in the eye of the beholder. If an alert has to travel through only two levels of analyst instead of three before it gets discounted, is it "less false"?

And then it’s back to numbers: the number of external intelligence feeds that are used to enrich the data that the monitoring system processes. (Still with me? Stay with the group; don’t get lost.) But are ten feeds necessarily better than one? Are a hundred feeds better than ten? How much more confidence are you getting, and after which number of feeds does the confidence level plateau?
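Why confidence might plateau is easy to see with a toy model. The sketch below is purely illustrative (not any vendor's methodology): it assumes a fixed universe of malicious indicators and ten hypothetical feeds that each independently observe about 30% of it. Because the feeds overlap heavily, each additional feed contributes fewer new indicators than the last, and cumulative coverage flattens out.

```python
import random

random.seed(42)

# Hypothetical "ground truth": a universe of 10,000 malicious indicators.
universe = set(range(10_000))

# Ten hypothetical feeds, each independently sampling ~30% of the universe.
# Heavy overlap between feeds is the (assumed) realistic condition here.
feeds = [set(random.sample(sorted(universe), 3_000)) for _ in range(10)]

seen = set()
for i, feed in enumerate(feeds, start=1):
    new = len(feed - seen)  # indicators this feed adds beyond all earlier ones
    seen |= feed
    coverage = len(seen) / len(universe)
    print(f"feed {i:2d}: +{new:5d} new indicators, "
          f"cumulative coverage {coverage:.0%}")
```

Under these assumptions the marginal gain shrinks roughly geometrically (expected coverage after k feeds is about 1 − 0.7^k), so the tenth feed adds only a sliver of what the first one did. Real feeds aren't independent random samples, of course, which is exactly why the question "after which number of feeds does confidence plateau?" deserves an honest answer rather than a bigger number.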

Finally, the latest attempt at differentiation uses the word "actionable." Again, how do you measure that? The word connotes a binary condition: either you can do something with it, or you can’t. Can one system produce data that is "more actionable" than another one, and if so, how do you prove it?

I expect that the next salvo fired in the Monitoring Metrics Wars will be the originality or uniqueness of the data. Perhaps the freshness, too. Not only will the data be processed "live" (which is supposed to be better than "real-time," I understand – or maybe it’s the other way around), but it’ll be newer than anyone else’s data, still dewy from the data fields. It’ll be organic, locally sourced, internally generated, and home-made. Just like Mother used to analyze.

One thing’s for sure: buyers will still be wading through the marketing morass, trying to search out bits of dry land that will hold up to a purchasing decision. Not only will they have trouble differentiating vendors and their offerings; they’ll also struggle to find metrics that tell them when their monitoring is good enough. There are few comparisons out there that are both objective and complete. But I personally would pay good money to see an Actionability Bakeoff.

Wendy Nather is Research Director of the Enterprise Security Practice at the independent analyst firm 451 Research. You can find her on Twitter as @451wendy.
