Perimeter
1/14/2013
10:20 AM
Wendy Nather
Commentary

All Your Base Are In An Indeterminate State

Or the importance of timeliness in monitoring

Does your data need to be poppin' fresh, organic, and locally sourced? Maybe not; it depends on how and why you’re consuming it.

Organizations that have figured this out tend to have tiers of data. There’s the kind of live data that must be consumed immediately and refreshed as soon as a change comes along. (Think of this as soufflé data if you like. You have to rush it out of the oven before it falls. Or the lettuce in your refrigerator that really isn’t worth it once it wilts; you’re better off getting a new head.) Live data drives immediate responses: trading transactions, stock prices, credit card processing, industrial control data, vital signs during a medical emergency, or altitude and speed data as a plane is landing. Live data will be kept as close to the consumption point as possible and will receive most of the storage, delivery, and access resources so that it can be updated as fast as it changes.

Then there’s cruising-speed data: data you might update on a regular basis, but whose timeliness isn’t as vital. For example, you could check once a day to see whether yesterday’s terminated employees had their access revoked by the evening. It’s still important, but not so much that you need up-to-the-minute reports vying for your attention. This data could be kept where it is generated and only presented on a schedule. To extend the grocery analogy, this would be the bottles of milk delivered to your door (does anyone else remember that, by the way?).
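That once-a-day termination check can be as simple as a set difference between two feeds. Here's a minimal sketch, assuming you can export yesterday's terminations from HR and the currently active accounts from your directory; all the account names below are made up for illustration.

```python
# Hypothetical daily check: anyone terminated yesterday who still has an
# active account should raise an alert. Both inputs are illustrative stand-ins
# for an HR feed and a directory export.

terminated_yesterday = {"jdoe", "asmith"}           # from the HR feed
still_active_accounts = {"jdoe", "bjones", "cliu"}  # from the directory export

# Set intersection: terminated users whose access was NOT revoked in time.
not_yet_revoked = terminated_yesterday & still_active_accounts

for account in sorted(not_yet_revoked):
    print(f"ALERT: terminated user '{account}' still has an active account")
```

Run from a nightly cron job, this gives you the "by the evening" guarantee without any up-to-the-minute reporting machinery.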

Reference data, or rarely used data, can be stored near-line or offline. These are those spices in your kitchen that you pick up every couple of years, decide that they’ve expired, and go out to get new ones. You can’t get rid of one entirely because you never know when you’re going to need nutmeg. Historical security data needs to be available for audits, or for, "Hey, haven’t we seen this before?" situations, but it should be delivered on demand and stay out of the way when it’s not needed.
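The three tiers above boil down to a routing decision based on how fresh a record needs to be. Here's a toy sketch of that decision; the age thresholds are illustrative assumptions, not recommendations, and real tiering would key off business requirements rather than age alone.

```python
# Illustrative mapping from a record's age to the three tiers described
# above. The cutoffs (1 day, 90 days) are made-up examples.

def storage_tier(age_in_days: int) -> str:
    """Route a record to a storage tier by age."""
    if age_in_days <= 1:
        return "live"       # souffle data: hot, close to the consumption point
    if age_in_days <= 90:
        return "cruising"   # delivered on a schedule, like the milk bottles
    return "reference"      # near-line/offline: the nutmeg in the pantry

print(storage_tier(0), storage_tier(30), storage_tier(365))
# prints: live cruising reference
```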

Despite the attractiveness of "Big Data," don’t fall into the trap of thinking this means you can put all these types of data together in one big, honking Hadoop. It’s like getting a gigantic freezer and thinking you can now fill it up with those huge packs of chicken wings that you only eat during football season. Cloud storage has made it easier to keep reference data near-line. (I don’t consider it completely online if it’s free to upload but you have to pay to restore.)

The important thing about security monitoring data is that its timeliness and velocity depend on how quickly you can do something with it. If you have enough resources to be able to take immediate action on an alert, or if you have automation in place that can change configurations on the fly (say, generate new IPS rules), then shrinking that "real time" window of data delivery makes sense -- and there are plenty of vendors out there that claim faster and faster speeds for that. But if you can only find time to review logs once a month, then syslog and grep are probably as much as you need; don’t spend money on a fancy SIEM if you can’t drive it more often than just on Sundays.
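For the monthly-review crowd, the "syslog and grep" workflow really is just pattern matching over a text file. Here's a hedged sketch in Python of the same idea, assuming syslog-style plain-text lines; the sample lines and the failed-password pattern are illustrative, not a prescribed detection rule.

```python
# A minimal stand-in for "syslog and grep": scan syslog-style lines for a
# pattern of interest and tally hits per user. Pattern and sample data are
# made up for illustration.
import re

PATTERN = re.compile(r"Failed password for (\S+)")

def failed_logins(lines):
    """Count failed-login attempts per user from syslog-style lines."""
    counts = {}
    for line in lines:
        match = PATTERN.search(line)
        if match:
            user = match.group(1)
            counts[user] = counts.get(user, 0) + 1
    return counts

sample = [
    "May  1 02:14:01 host sshd[123]: Failed password for root from 10.0.0.5",
    "May  1 02:14:03 host sshd[123]: Failed password for root from 10.0.0.5",
    "May  1 09:00:00 host CRON[456]: session opened for user backup",
]
print(failed_logins(sample))  # prints: {'root': 2}
```

If this is the cadence you can actually sustain, it delivers the same answers a SIEM would give you once a month, at roughly the cost of a cron job.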

And if you’re an organization that can’t afford log storage at all -- I know you’re out there -- and you have the equivalent of an empty fridge with a jar of mustard and two beers, and you go out for meals all the time, then think about the nutritional value of your data and what it’s costing you to have someone else dig it up for you in emergencies. This is probably why you’re not discovering that you’ve been breached until law enforcement tells you about it six months later.

Now that we have a fresh start with the new year, and you’re reorganizing your pantry and freezer anyway, you might as well review your security data storage. Look at your response requirements and capabilities and then decide what needs to be pushed to the front. While you’re at it, you might take some Windex to that "single pane of glass," if you have one for dashboards and such. Your mother -- I mean, auditor -- will be proud.

Wendy Nather is Research Director of the Enterprise Security Practice at the independent analyst firm 451 Research. You can find her on Twitter as @451wendy.

