My recent blog on incident response program maturity sparked some fundamental questions about maturity models as a concept. Let’s use this blog to establish what we are talking about in terms of maturity, audit versus assessment, etc.
The intent to understand a client’s maturity does not demand what we would call an audit. An audit implies a checklist the client can pass or fail, and it invites “managing to the audit” rather than becoming more secure (like the grade inflation we have experienced in US schools). What we are suggesting instead is an assessment of current state against a backdrop of maturity and capability. (Take another look at the table.)
Maturity is loosely tied to CMMI (Capability Maturity Model Integration) in the sense that it has been an industry-accepted term/framework for some time. It is intuitive to think about the current state of security maturity and capability in terms of “reactive, compliant, proactive, optimizing,” but you could really use any version of this to achieve what we are suggesting. I have seen other maturity models that reference levels of capability versus state (i.e., no capability, some capability, etc.).
What we are trying to do is define the activities we see at the various maturity levels we have defined. As an example, look at incident-response (IR) capability. Many of our clients pull their IT teams into IR when things go sideways. Validating our experience, a McAfee-sponsored SANS survey on incident response found that 61% of respondents draw in additional assistance from their internal IT staff to address their IR surge needs. That approach creates several problems:
- IT can’t do their day job while focusing on the incident.
- They do not have the skills or the tools to follow basics such as sound forensic process or chain of custody.
- Containment takes longer, and the ability to pursue responsible parties is likely lost.
- How effective is remediation that is defined by a team for whom, again, this is not their expertise? (If they didn’t understand the root cause, the incident could just keep repeating.)
- A breach or incident is not really an IT problem. It requires specific skills and capabilities to detect and respond effectively.
The above state implies a low level of maturity in terms of IR as a means to affect the impact component of the risk equation (Risk = Threat x Vulnerability x Impact). If you know the current state, and you help the client define the desired state, it becomes a simple roadmap of steps to get from current to desired.
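As a rough illustration of how IR maturity acts on the impact term of that equation, here is a minimal sketch. The 0–1 scores and the specific numbers are hypothetical, chosen purely for illustration; a real program would derive them from assessments, not pick them by hand.

```python
# Illustrative sketch of the risk equation referenced above:
#   Risk = Threat x Vulnerability x Impact
# All values are hypothetical 0-1 scores, not a real scoring methodology.

def risk(threat: float, vulnerability: float, impact: float) -> float:
    """Multiplicative risk model: driving any factor toward zero drives risk down."""
    return threat * vulnerability * impact

# Same threat and vulnerability; a more mature IR capability shrinks the
# impact of an incident (faster containment, cleaner remediation).
low_ir_maturity = risk(threat=0.8, vulnerability=0.6, impact=0.9)
high_ir_maturity = risk(threat=0.8, vulnerability=0.6, impact=0.3)

print(f"risk with weak IR: {low_ir_maturity:.3f}")   # 0.432
print(f"risk with mature IR: {high_ir_maturity:.3f}")  # 0.144
```

The point is not the numbers; it is that IR investment shows up as a smaller impact factor, which the business can see directly in the product.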
You could extrapolate this concept to the proposed areas of security that were highlighted (strategy, infrastructure, application security, IR, awareness, metrics) to create a more meaningful, business-driven conversation that somewhat obfuscates all of the security industry specific “nuts and bolts” that have created the perception gap in the first place.
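To make that current-versus-desired roadmap concrete, here is a hedged sketch that scores each of the listed areas on a simple 0–3 scale (reactive through optimizing) and sorts the gaps. The areas come from the table in the post; the level numbers are hypothetical examples, not recommendations.

```python
# Hypothetical current vs. desired maturity per security area,
# on a 0-3 scale: reactive=0, compliant=1, proactive=2, optimizing=3.
AREAS = ["strategy", "infrastructure", "application security",
         "IR", "awareness", "metrics"]

current = {"strategy": 1, "infrastructure": 2, "application security": 0,
           "IR": 0, "awareness": 1, "metrics": 0}
desired = {"strategy": 2, "infrastructure": 2, "application security": 2,
           "IR": 2, "awareness": 2, "metrics": 1}

# The "roadmap" is just the per-area gap, largest first.
gaps = sorted(((area, desired[area] - current[area]) for area in AREAS),
              key=lambda pair: pair[1], reverse=True)

for area, gap in gaps:
    print(f"{area:>22}: close {gap} level(s)")
```

Even a toy representation like this gives the business something quantifiable to prioritize against, instead of a pile of framework-specific control IDs.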
Over and over, I see clients spending a lot of money doing a lot of “security stuff,” which creates a perception of “we are doing stuff – we are getting more secure.” That is not necessarily the case, and it may dangerously create a false sense of security. If we define state and progress in terms of critical controls and frameworks such as ISO, NIST, and COBIT, we are speaking a language the business will never grasp, leaving it unable to draw any conclusions about how dollars spent translate into better security (especially since there is no direct correlation in the absence of any real metrics – another typical gap).
Maturity models translate actions into goals that even non-security people can grasp.
I’ll admit this approach is still subjective, based on “expert opinion,” but to me it’s more honest than “risk scores,” which create pseudo-specificity that is still based on “expert opinion.” It also makes more practical sense: it illustrates, for example, why it’s difficult to go from maturity 0 to 3 just by deploying a SIEM. It doesn’t reduce the complexity of the problem, or even of the solution, but maturity is common ground for the business and the geeks. It also reduces some of the chaos of representing current versus desired state, creating a quantifiable set of rational goals instead of the “let’s lose some weight” (or “buy and deploy some stuff”) approach we often see.
Additionally, the areas of security listed in the table (strategy, infrastructure, application security, IR, awareness, metrics) are the distilled result of seeing incidents, their root causes, and alarming assessment results first-hand. It’s not exhaustive, nor is it an attempt to replace any other framework. It’s just a basic mental checklist to “plumb line” your current program to make sure what’s there at least covers the basics.