Normalizing the security data spewing from tools across the enterprise is a key step in creating a consistent set of metrics for managing risk.

Dark Reading Staff, Dark Reading

March 19, 2013

5 Min Read

With so much data streaming in real time from network logs, vulnerability managers, infrastructure monitoring tools, and security appliances across the enterprise, one of the most difficult first steps IT risk managers must take in developing a security metrics program is often distilling that data into a consistent set of risk scores that makes sense in the boardroom.

"You've got all these different controls, they all talk about assets differently, they all present different information," says Dwayne Melancon, CTO of Tripwire. "So how do I roll that up into a small number of indicators that actually helps me develop confidence that I'm secure or my risk score is going down?"

It's not an easy question to answer, he says, but it starts with some kind of data normalization process. Data normalization helps organizations make better apples-to-apples comparisons, or at the very least something close. Apples-to-oranges is a better evaluation model than apples-to-lettuce, after all.
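In practice, that normalization step often means mapping each tool's native severity scale onto one common scale before any comparison happens. Here is a minimal sketch in Python of that idea, assuming three hypothetical feeds (a CVSS-scored vulnerability scanner, an appliance with low/medium/high labels, and syslog severities); the mappings are illustrative, not any particular product's.

```python
# A minimal sketch of severity normalization across heterogeneous feeds.
# The feed names and scales below are invented for illustration.

def normalize_cvss(score: float) -> float:
    """CVSS already uses 0-10; scale it to a common 0-1 range."""
    return max(0.0, min(score / 10.0, 1.0))

def normalize_vendor_label(label: str) -> float:
    """Map a qualitative low/medium/high/critical rating onto 0-1."""
    return {"low": 0.25, "medium": 0.5, "high": 0.75, "critical": 1.0}[label.lower()]

def normalize_syslog_severity(level: int) -> float:
    """Syslog severities run 0 (emergency) to 7 (debug); invert onto 0-1."""
    return (7 - level) / 7.0

# Three findings from different tools, now directly comparable
findings = [
    ("vuln-scanner", normalize_cvss(7.5)),
    ("ids-appliance", normalize_vendor_label("high")),
    ("app-server-log", normalize_syslog_severity(2)),  # syslog "critical"
]
for source, score in findings:
    print(f"{source}: {score:.2f}")
```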

[Are you governing without good metrics? See Governance Without Metrics Is Just Dogma.]

"A project to normalize security metrics should focus on building a key set of security risks that can be evaluated through quantifiable, consistent, and measurable metrics over time," says Steve Schlarman, eGRC solutions manager at RSA, explaining that these metrics shouldn't be overly complex for the metric owners. "If the data takes too long to compile, report, or evaluate, then the metric owner will not be able to report consistently over time."

To normalize security data and tie it to metrics, Melancon says, start with the business first. Doing so establishes a shorter, more relevant list of data feeds that need to be normalized.

"I think one of the tendencies that a lot of security people tend to have is they start with the controls, and they end up with a lot more controls than they otherwise may need," he says.

If the organization is a public company, for example, get a sense of how it makes money by reading annual reports and thinking critically about the biggest risks to key revenue streams.

"Then back up and say, 'OK, what controls do we have that help us monitor and get better confidence around those things?'" he says, explaining that the data from those tools will be the ones around which organizations should start building security performance indicators and risk scores. As they seek ways to develop those, Melancon warns security professionals to remember that just as they prioritize security spending based on risk, they also need to prioritize how they examine and normalize data based on how important certain assets are to the business.

"Where this falls apart is a lot of organizations try to apply the same level of rigor across everything, and you just choke everybody," Melancon says. "Either you're too bureaucratic or too slow, or you're always frustrated. So if you start with what are our top critical services and assets associated with those, then you can at least adjust the shape of your spending to match the shape of your risk."

By evaluating business assets first to determine which data should be normalized and included in the metrics program, organizations can streamline the number of controls whose measurements need normalizing. That helps not only with consistency, but also with responsiveness.

"A realistic goal of data normalization is to be able to analyze useful data in real time, especially if the purpose is risk assessment and management," says Rick Aguirre, president of Cirries Technologies. "You want to know about threats as they happen, not three days later in some data pool somewhere. By far, most of the data generated by networks and devices is not useful."

As an organization establishes normalization processes for better metrics, it is crucial to clearly define seven core attributes for each metric, Schlarman says: the metric description, the measurement process or formula, ownership, scope, the source of the metric, measurement frequency, and the trend expectation. From there, the risk management team should offer a forum where metric owners report on a consistent basis and do root-cause analysis.

"The main goal is to set up a sustainable program, not a one-time effort," he says. "Then, over time, metrics can be 'activated' and 'retired' as necessary within the program."

At the moment, the industry is still "a little bit of the Wild, Wild West" in how most organizations apply security or risk ratings to their asset data, Melancon says. Some organizations apply confidentiality, integrity, or availability ratings to their assets and use those as a basis. Others might use one of the NIST frameworks. One framework he sees as promising is the Continuous Asset Evaluation, Situational Awareness, and Risk Scoring (CAESARS) framework, developed by the Department of Homeland Security and extended by NIST, which provides a solid foundation for risk scoring.

"The concept is you take a whole bunch of different controls, like antivirus, IDS, IPS, file integrity monitoring, database activity monitoring, and all of these different scores, and roll them up into one composite indicator, and then you use that to track whether your risk is going up or down overall," Melancon says. "The idea is great, but the execution is really hard."

He believes the industry needs a lighter-weight version of something like CAESARS, so that organizations with limited budgets or man-hours can still pinpoint five to 10 metrics to focus on.
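In its lightest-weight form, the roll-up Melancon describes can be sketched as little more than a weighted average of per-control risk scores, once each control's output has been normalized onto a common scale. The Python below is exactly that kind of minimal sketch; the control names, scores, and weights are invented, and CAESARS itself specifies a far richer scoring model.

```python
# A minimal composite risk indicator in the spirit of the roll-up
# Melancon describes. Per-control scores run 0 (no findings) to 1 (worst);
# all values here are invented for illustration.

control_scores = {
    "antivirus": 0.20,
    "ids": 0.45,
    "file_integrity_monitoring": 0.10,
    "database_activity_monitoring": 0.35,
}

# Weight controls by how much they matter to critical business services.
weights = {
    "antivirus": 1.0,
    "ids": 2.0,
    "file_integrity_monitoring": 1.5,
    "database_activity_monitoring": 2.5,
}

def composite_risk(scores: dict, weights: dict) -> float:
    """Weighted average of per-control risk scores, on the same 0-1 scale."""
    total_weight = sum(weights[c] for c in scores)
    return sum(scores[c] * weights[c] for c in scores) / total_weight

print(f"composite risk indicator: {composite_risk(control_scores, weights):.2f}")
# Track this number over time: rising means overall risk is going up.
```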

From there, it will be much easier to offer line-of-business executives a consistent set of key performance indicators that they can easily understand. This is a critical point, says John Johnson, global security program manager for John Deere, who explains that executives don't like things like security heat maps or fancy threat graphics that get "down in the weeds" of security operations.

"Executives want to see the most boring stuff in the world. They just want to see a dot that follows a straight line," Johnson says. "They don't want a slope or a peak -- they don't want to know there was some virus out there last week. They just want to know, are you hitting these key performance indicators you are tracking and is what you're doing making sense?"
