There are several key market forces affecting the cyber landscape that regularly make the headlines: a shortage of security personnel, a huge rise in the number of security tools, and a growing attack surface due to the move to bring-your-own-device policies and the cloud. However, another market force is changing the nature of the industry: increasing pressure to adhere to numerous regulations such as the General Data Protection Regulation, the SHIELD Act, the California Consumer Privacy Act, and the more-recent MAS cyber hygiene notices.
Auditors and regulators expect us to show that reasonable security measures are in place to protect customers' personal data and business-critical applications, at any point in time. And this is where we struggle — to demonstrate that due care was taken. The trend we see is that organizations are investing in a lot of tools to manage risks. This is shown by a recent study, conducted by Forrester Research, which surveyed more than 250 senior security decision-makers in North America and Europe.
The report outlined that organizations are using multiple technologies to identify and mitigate risk, including security analytics platforms; vulnerability management; governance, risk, and compliance platforms; and vendor risk management platforms. But multiple tools can compound the issues around reporting — reports must be collated and organized manually, taking the team away from "doing security" and reducing the likely frequency of report updates, which means stakeholders do not have one version of the truth.
To alleviate the disconnect, as a sector we recognize that we need to move to continuous and accurate cyber-risk reporting, fueled by automated data collection and collation. The starting point for this is an agreement on what security metrics should be measured and how. There are several practical principles that we can use to make metrics more business-focused, accurate, and measurable as we move into an era where accuracy and relevance are king.
The starting point is an agreement on which questions need to be answered to make the business more secure and what data is available to help inform the answers. The metrics must be able to stand up to scrutiny. We also need to make sure we know what to do with an answer to the original question. I liken it to The Hitchhiker's Guide to the Galaxy — if I told you the answer to the meaning of life, the universe, and everything was 42, what would you do with that information? If we don't know what to do with any given metric, then we need to go back to the beginning.
The next practical principle is to always aim for simplicity. A complex metric, one that is hard to interpret, may be less effective than a couple of simple ones! If the audience for a metric doesn't get the message it’s intended to convey, the metric has failed no matter how "smart" it might be. Simple stats that are well-executed and easy to explain win over black-box analyses every day of the week. And don't forget we need to add business context — business-focused metrics resonate with the board and business stakeholders as they enable them to drive action.
How many metrics do we need? An effective approach is to align metrics to industry-accepted security frameworks. Aligning to a framework gives an indication of how well a metrics program covers the breadth of security areas and whether there are any gaps that need filling. Frameworks can help provide a familiar structure for a metrics program and naturally provide higher levels at which we can summarize analysis and provide an effective overview for business stakeholders.
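As a rough sketch of what framework alignment can look like in practice, consider tagging each metric with a framework function and checking for coverage gaps. The metric names below are hypothetical, and the NIST CSF functions are used purely as one illustrative framework:

```python
from collections import defaultdict

# Hypothetical metrics, each tagged with the NIST CSF function it supports.
metrics = [
    {"name": "% of assets inventoried", "csf_function": "Identify"},
    {"name": "% of endpoints with antivirus deployed", "csf_function": "Protect"},
    {"name": "Mean time to detect incidents", "csf_function": "Detect"},
    {"name": "% critical vulns patched within SLA", "csf_function": "Protect"},
]

CSF_FUNCTIONS = ["Identify", "Protect", "Detect", "Respond", "Recover"]

# Group metrics by function to see how evenly the program covers the framework.
coverage = defaultdict(list)
for m in metrics:
    coverage[m["csf_function"]].append(m["name"])

gaps = [f for f in CSF_FUNCTIONS if f not in coverage]
print("Functions with no metrics:", gaps)
```

Run against this sample catalog, the check would flag Respond and Recover as areas where the program has no measurements yet, which is exactly the gap-spotting a framework structure enables.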
Now it's time to collect data and build metrics. A high-quality inventory is the foundation for trusted metrics. Try to combine multiple datasets to get the most complete and accurate picture of assets possible and classify them as accurately as possible, asking questions such as: Is this server Internet-facing? Does this database support a critical app? Which business line owns this? This enables metrics to have that all-important business context and helps with prioritization. Being able to show metrics for the infrastructure supporting business-critical applications is invaluable for getting buy-in from the business.
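To illustrate the idea of combining datasets, here is a minimal sketch that merges a CMDB export with scanner discoveries. The hostnames, fields, and data sources are all hypothetical; the point is that assets seen by only one source get surfaced rather than silently dropped:

```python
# Hypothetical CMDB export: business context per host.
cmdb = {
    "srv-01": {"internet_facing": True, "critical_app": "payments", "owner": "Finance"},
    "srv-02": {"internet_facing": False, "critical_app": None, "owner": "HR"},
}

# Hypothetical scanner discoveries: technical facts per host.
scanner = {
    "srv-01": {"os": "linux"},
    "srv-03": {"os": "windows"},  # seen on the network but absent from the CMDB
}

# Union the two sources so no asset is lost, layering scanner data
# first and CMDB business context on top where it exists.
inventory = {}
for host in sorted(set(cmdb) | set(scanner)):
    record = {"internet_facing": None, "critical_app": None, "owner": None, "os": None}
    record.update(scanner.get(host, {}))
    record.update(cmdb.get(host, {}))
    inventory[host] = record

# Hosts with no owner are classification gaps to chase up with the business.
unclassified = [h for h, r in inventory.items() if r["owner"] is None]
print("Needs classification:", unclassified)
```

In this sample, srv-03 appears only in scan data, so it would be flagged as lacking an owner and business context, which is precisely the kind of gap that undermines a metric's credibility if left unexamined.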
Also, it's key to verify rather than trust. We don't want to add inaccuracies into metrics by assuming we know some of the facts already — e.g., "they told me antivirus was deployed on all my devices." And if we can't measure something, it shouldn't be in our metrics program! Bear in mind, of course, there are more or less accurate ways to measure — an approximate measurement is fine as a starting point, but a guess is not.
Once we have verified, we need to verify again. Use the type of metric to assess an ideal frequency and then measure as close to that as is feasible for the organization — for example, if the vulnerability scanner runs once a week, there's no need to refresh the data and rebuild those metrics daily.
Finally, never forget that whether the metrics are for the board, a business line, regulator, or auditor, the key is also knowing the accuracy, timeliness, and the limitations of the measurements. A good illustration is patching time on our servers. We need to make sure we know the percentage of servers that aren't covered by our scanner. After all, "90% server vulnerabilities fixed within service-level agreement" becomes decidedly less impressive if we know that only 50% of servers are being scanned.
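The arithmetic behind that caveat is worth making explicit. Under the pessimistic assumption that unscanned servers are unremediated, the reported rate should be discounted by scanner coverage — a simple sketch (the function name is my own, not from any standard):

```python
def coverage_adjusted_rate(reported_rate: float, coverage: float) -> float:
    """Worst-case view of a remediation metric.

    reported_rate: fraction of *scanned* servers fixed within SLA.
    coverage: fraction of all servers the scanner actually covers.
    Assumes unscanned servers are unremediated, so the known-good
    fraction of the full estate is the product of the two.
    """
    return reported_rate * coverage

# The example above: 90% fixed within SLA, but only 50% of servers scanned.
print(coverage_adjusted_rate(0.90, 0.50))  # 0.45 -> only 45% known-fixed
```

The headline "90%" quietly becomes 45% of the estate that we can actually vouch for, which is why stating coverage and limitations alongside every metric matters.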
The key takeaway here is that a proactive approach to cybersecurity requires the right tools, not more tools — just as a metrics program is much more effective with simple, accurate metrics rather than a host of numbers that may be wrong, as well as out of date.