10 Ways To Measure IT Security Program Effectiveness
The right metrics can make or break a security program (or a budget meeting).
March 16, 2015
![](https://eu-images.contentstack.com/v3/assets/blt6d90778a997de1cd/blt3497128d72cf8642/64f0dc6c56940954a88492c9/employee.jpg?width=700&auto=webp&quality=80&disable=upscale)
Just how effective is all of that "soft" spending on security awareness training? Steve Santorelli of Team Cymru says there are ways to track and measure it, primarily through phishing and social engineering stress testing, where you test your staff's awareness of both kinds of attack.
"Basically, you run a fake phishing campaign and make a few hoax calls," says Santorelli, director of analysis and outreach for the research firm. "Reward and publicize good results, help failures to learn from their errors, and you'll have folks actively watching out for these attacks--for a few weeks at least."
One way to justify spending on those shiny boxes is to start tracking just how many of the security incidents detected across the organization are found through an automated tool.
"This is a good one because it not only encourages you to become familiar with how incidents are detected, it also focuses you on automation, which reduces the need for 'humans paying attention' as a core requirement," says Dwayne Melancon, CTO of Tripwire. "It also makes it easier to lobby for funding from the business, since you can make the case that automation reduces the cost of security while lowering the risk of harm to the business from an unnoticed incident."
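The detection-source metric above can be sketched in a few lines. This is a minimal illustration, not a reference implementation: the incident records, the `detected_by` field, and the set of automated sources are all hypothetical.

```python
# Sketch: share of incidents first detected by automated tooling vs. humans.
# The incident records and the "detected_by" field are hypothetical examples.
incidents = [
    {"id": 1, "detected_by": "siem"},
    {"id": 2, "detected_by": "user_report"},
    {"id": 3, "detected_by": "ids"},
    {"id": 4, "detected_by": "siem"},
]

AUTOMATED_SOURCES = {"siem", "ids", "edr"}  # assumed tool categories

def automated_detection_rate(incidents):
    """Fraction of incidents whose first detection came from an automated tool."""
    if not incidents:
        return 0.0
    automated = sum(1 for i in incidents if i["detected_by"] in AUTOMATED_SOURCES)
    return automated / len(incidents)

print(f"{automated_detection_rate(incidents):.0%}")  # 3 of 4 incidents -> 75%
```

Tracked month over month, a rising rate supports the funding argument Melancon describes: more incidents caught without "humans paying attention."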
CISOs can show accountability by giving the CEO, board, and CFO visibility into the spending process, offering metrics on the percentage of strategic IT security projects completed on time and on budget, says Dan Lohrmann, chief strategist and chief security officer at Security Mentor.
"This could be a project on encryption, new firewalls, or whatever the top security projects are," Lohrmann says. "This metric ensures that security is accountable for delivering ever-increasing value and improvements to the executive team."
Is your security program suffering from information overload? Measuring the time it takes to collect data compared to when it is analyzed can help answer that question.
"Reducing the analytical timeline allows IT teams to recognize and act more quickly to prevent, or to detect and address, breaches, thereby improving the organization's overall security posture," says Christopher Morgan, president of IKANOW.
"Reducing the time it takes to analyze security data, from either internal firewall or SIEM information or outside threat intelligence feeds, requires giving data scientists the tools and time to focus on data analysis," he says.
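The collection-to-analysis lag can be computed directly from timestamps. A minimal sketch, assuming each data record carries a collection time and an analysis time (the record layout and timestamps are hypothetical):

```python
# Sketch: median lag between when security data is collected and when it is
# actually analyzed. Record layout and timestamps are hypothetical.
from datetime import datetime
from statistics import median

records = [
    {"collected": datetime(2015, 3, 1, 9, 0), "analyzed": datetime(2015, 3, 1, 13, 0)},
    {"collected": datetime(2015, 3, 2, 9, 0), "analyzed": datetime(2015, 3, 3, 9, 0)},
    {"collected": datetime(2015, 3, 3, 9, 0), "analyzed": datetime(2015, 3, 3, 11, 0)},
]

def median_analysis_lag_hours(records):
    """Median hours from data collection to analysis."""
    lags = [(r["analyzed"] - r["collected"]).total_seconds() / 3600 for r in records]
    return median(lags)

print(median_analysis_lag_hours(records))  # lags of 4h, 24h, 2h -> median 4.0
```

Using the median rather than the mean keeps one slow batch from masking the typical timeline.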
This metric can also help get a bead on the effectiveness of the incident response and security analyst functions within a program.
"What is the rate of incidents handled by the security team in which they have a full understanding of the reason for the alert, the circumstances causing it, its implications, and its effect?" says Div of Cybereason.
A low rate relative to the overall volume of opened cases reveals gaps in visibility and could trigger an ask for more investment in human resources or tools.
Tracking the total number of incident response cases opened against those closed and pending will help CISOs identify how well incidents are being found and addressed.
"This shows that incidents are being identified, along with remediation and root cause analysis," says Scott Shedd of WGM Associates. "This is critical for continuous improvement of an information security program."
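A simple tally of cases by status gives this view. The status values below are hypothetical; the point is the closure rate derived from them:

```python
# Sketch: tally incident response cases by status and compute a closure rate.
# Status labels are hypothetical examples.
from collections import Counter

cases = ["closed", "closed", "pending", "open", "closed", "pending"]

status_counts = Counter(cases)
closure_rate = status_counts["closed"] / len(cases)

print(dict(status_counts))        # counts per status
print(f"{closure_rate:.0%}")      # 3 of 6 closed -> 50%
```

A growing "pending" count against a flat "closed" count is the early-warning signal a CISO wants from this metric.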
In the same vein, patch latency can also show how effective the program is at reducing risk from the low-hanging fruit.
"We need to demonstrate progress in the vulnerability patch process. For many organizations with thousands of devices, this can be a daunting task. Focus on critical vulnerabilities and report patching latency," says Scott Shedd, security practice leader for consulting firm WGM Associates. "Report what we patched, what remains unpatched, and how many new vulnerabilities have been identified."
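Shedd's report boils down to latency plus a backlog count, filtered by severity. A minimal sketch under assumed data (the vulnerability records, field names, and severity labels are hypothetical):

```python
# Sketch: patch-latency report for one severity tier -- average days from
# identification to patch, plus how many remain unpatched.
# Record layout and field names are hypothetical examples.
from datetime import date

vulns = [
    {"id": "CVE-A", "severity": "critical", "found": date(2015, 2, 1), "patched": date(2015, 2, 10)},
    {"id": "CVE-B", "severity": "critical", "found": date(2015, 2, 5), "patched": None},
    {"id": "CVE-C", "severity": "low",      "found": date(2015, 2, 7), "patched": date(2015, 2, 8)},
]

def patch_latency_report(vulns, severity="critical"):
    """Average patch latency in days, and unpatched count, for one severity tier."""
    tier = [v for v in vulns if v["severity"] == severity]
    patched = [v for v in tier if v["patched"] is not None]
    latencies = [(v["patched"] - v["found"]).days for v in patched]
    avg = sum(latencies) / len(latencies) if latencies else None
    return {"avg_latency_days": avg, "unpatched": len(tier) - len(patched)}

print(patch_latency_report(vulns))  # -> {'avg_latency_days': 9.0, 'unpatched': 1}
```

Restricting the report to critical findings, as Shedd suggests, keeps the metric meaningful even for organizations with thousands of devices.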
Tracking the False Positive Reporting Rate (FPRR) can help put the work of lower-level analysts under the microscope, making sure that the judgments they make on automatically filtered security event data are sifting out false positives from indicators of compromise before alerts escalate to others on the response team.
"Despite the implementation of automated filtering, the SOC team must make the final determination as to whether the events they are alerted to are real threats," Greg Boison of Lockheed Martin says. "The reporting of false positives to incident handlers and higher-level management increases their already heavy workload and, if excessive, can de-motivate and cause decreased vigilance."
A high FPRR could indicate that Level 1 analysts need better training or that analytics tools need better tuning.
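As a sketch, FPRR is the share of escalated alerts later judged to be false positives. The escalation records and the `false_positive` field below are hypothetical:

```python
# Sketch: False Positive Reporting Rate -- fraction of alerts escalated to
# incident handlers that turned out to be false positives.
# Record layout is a hypothetical example.
escalations = [
    {"alert": "a1", "false_positive": True},
    {"alert": "a2", "false_positive": False},
    {"alert": "a3", "false_positive": True},
    {"alert": "a4", "false_positive": True},
    {"alert": "a5", "false_positive": False},
]

def fprr(escalations):
    """Fraction of escalated alerts judged to be false positives."""
    if not escalations:
        return 0.0
    return sum(e["false_positive"] for e in escalations) / len(escalations)

print(f"FPRR: {fprr(escalations):.0%}")  # 3 of 5 escalations -> 60%
```

Trending this per analyst or per detection rule shows whether the fix is training or tool tuning.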
As CISOs try to find ways to prove ROI to higher ups and improve the overall effectiveness of security operations, the right metrics can make or break their efforts. Fortunately, infosec as an industry has matured to the point where many enterprising security leaders have found innovative and concrete measures to track performance and drive toward continual improvement. Dark Reading recently surveyed security practitioners and pundits to find out the best time-tested metrics to prove security effectiveness, ask for greater investment, and push security staff to improve their day-to-day work.
Average Time To Detect And Respond
Also referred to as mean time to know (MTTK), the average time to detect (ATD) measures the delta between an issue occurring—be it a compromise or a configuration gone wonky—and the security team figuring out there's a problem.
"By reducing ATD, Security Operations Center (SOC) personnel give themselves more time to assess the situation and decide upon the best course of action that will enable the enterprise to accomplish its mission while preventing damage to enterprise assets," says Greg Boison, director of cyber and homeland security at Lockheed Martin.
Meanwhile, the mean time to resolution, or average time to respond, measures how long it takes the security team to appropriately respond to an issue and mitigate its risk.
"Average Time to Respond (ATTR) is a metric that tells SOC management and personnel whether or not they are meeting objectives to quickly and correctly respond to identified violations of the security policy," Boison says. "By reducing ATTR, SOC personnel reduce the impact (including the cost) of security violations."
Tracking these two metrics continuously shows whether a security program is improving or deteriorating; ideally, both should shrink over time.
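Both averages fall out of three per-incident timestamps: when the issue occurred, when it was detected, and when it was resolved. A minimal sketch with hypothetical timestamps:

```python
# Sketch: average time to detect (occurred -> detected) and average time to
# respond (detected -> resolved), from per-incident timestamps.
# The incident records and timestamps are hypothetical examples.
from datetime import datetime

incidents = [
    {"occurred": datetime(2015, 3, 1, 8, 0),
     "detected": datetime(2015, 3, 1, 20, 0),
     "resolved": datetime(2015, 3, 2, 8, 0)},
    {"occurred": datetime(2015, 3, 5, 9, 0),
     "detected": datetime(2015, 3, 5, 15, 0),
     "resolved": datetime(2015, 3, 5, 21, 0)},
]

def mean_hours(incidents, start, end):
    """Average hours between two per-incident timestamps."""
    deltas = [(i[end] - i[start]).total_seconds() / 3600 for i in incidents]
    return sum(deltas) / len(deltas)

atd = mean_hours(incidents, "occurred", "detected")   # average time to detect
attr = mean_hours(incidents, "detected", "resolved")  # average time to respond
print(atd, attr)  # detect lags of 12h and 6h, respond lags of 12h and 6h -> 9.0 9.0
```

Plotting both numbers per month gives the improving-or-deteriorating trend line the slide describes.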