Drag Your Adolescent Incident-Response Program Into Adulthood
It’s not about how many tools you have, but what you can do with them.
November 4, 2014
We are often the incident response (IR) team called in to put out the fire after the security breaches you see on the news, seemingly every week. We see the facts, flaws, and foibles of these crises, and afterward we help the organization pick up the pieces and assemble a better approach to prevent a next-time. We usually find that fundamentals have been missed. Although the virtual front door was padlocked with an ornate authentication system, the attacker hopped in a virtual back window by stealing legitimate credentials from an insecure application with a basic flaw. Or one phish got away from the anti-spam filter. Or an unmanaged asset missed a patch.
As you watch the dramas unfold, are you wondering how good your own incident response plan is? Are you taking comfort in the range of security technologies you have installed? Are you reassuring yourself and your peers that your organization would not commit the same fundamental errors that seem to be at the root of most of these breaches? But how can you be sure?
Many of the organizations we meet with have IR plans, often expensively developed by third-party consultants. The plans are usually evaluated against theoretical scenarios but not rehearsed and real-world tested. In addition, these companies frequently have an overabundance of security technology but fail to recognize and address the root causes of their vulnerabilities and breaches. In the absence of solid metrics to measure the results of their efforts, it can be difficult to justify the investments required for their security needs, leaving some aspects of the security program underdeveloped because of lack of budget approval.
One of the IR evaluation tools we have been using with customers is a variation on the capability-maturity model concept, which dates back to the 1970s. Maturity does not refer to the age of the model, but to the level of formality and optimization of the procedures. A security architecture or framework can be prescriptive in what processes and technologies you need, but too generic to be a perfect fit for anyone.
A maturity model, instead, defines three or four levels of increasing capabilities and metrics, across different areas of responsibility. In enterprise security, for example, you might call the levels reactive (Level 0), compliant, proactive, and optimized (Level 3), covering areas such as metrics, user awareness, infrastructure, applications, incident response, and strategy.
Using a model like the one above, you evaluate your maturity level in each area and identify the processes and technologies you need to adjust or invest in to get you to the level that matches your organization’s specific acceptable level of risk.
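Turned into something concrete, that gap analysis can be as simple as a table of areas and scores. The sketch below is a minimal illustration in Python; the area names come from the example above, but the scores and the "proactive" target are hypothetical stand-ins for a real self-assessment:

```python
# Levels 0..3 match the example above: reactive, compliant, proactive, optimized.
LEVELS = ["reactive", "compliant", "proactive", "optimized"]

# Hypothetical self-assessment scores, one per area of responsibility.
current = {
    "metrics": 0,
    "user awareness": 1,
    "infrastructure": 2,
    "applications": 1,
    "incident response": 1,
    "strategy": 0,
}
TARGET = 2  # "proactive" -- pick the level that matches your acceptable risk

# Gap analysis: which areas fall short of the target, and by how much.
gaps = {area: TARGET - level for area, level in current.items() if level < TARGET}
for area, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{area}: {LEVELS[current[area]]} -> {LEVELS[TARGET]} "
          f"({gap} level(s) to close)")
```

Sorting by gap size gives a rough ordering of where investment arguments are most urgent, which is exactly the kind of simple metric that makes budget conversations easier.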
Let’s look at the recent Bash bug, known as Shellshock, a vulnerability in a widely deployed Unix command-line shell. The response to this incident required identifying all of the vulnerable devices, ranking them by level of exposure, obtaining the appropriate patches, and applying them.
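The first of those steps, identifying vulnerable devices, can be automated. As a minimal sketch, the widely published probe for CVE-2014-6271 (export a function-style environment variable and see whether the code after the function body runs) can be wrapped in Python; the function name and the bash path are illustrative, not part of any standard tool:

```python
import subprocess

def is_bash_shellshock_vulnerable(bash_path="/bin/bash"):
    """Return True if the given bash binary executes code smuggled in
    through a function-style environment variable (CVE-2014-6271)."""
    # On a vulnerable bash, the "echo VULNERABLE" after the function body
    # runs as soon as bash imports the environment variable.
    env = {"testvar": "() { :;}; echo VULNERABLE"}
    try:
        result = subprocess.run(
            [bash_path, "-c", "echo done"],
            env=env, capture_output=True, text=True, timeout=5,
        )
    except (OSError, subprocess.TimeoutExpired):
        return False  # no bash at that path, or it hung: treat as not confirmed
    return "VULNERABLE" in result.stdout
```

A check like this only covers hosts you can reach and already know about, which is precisely why the asset-inventory maturity described below matters so much.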
A reactive security group has an ad hoc approach to this. Upon getting the updated threat intelligence (possibly from a general news report), the group works to assemble a team, search for potentially vulnerable systems throughout their network, identify the owners, and then painstakingly assess the software version levels of each, upgrading and patching as they can. However, this leaves the company at risk if they don’t find all of the vulnerable systems.
A compliant security group has an annual asset list to start from, but the list is out of date and lacks the configuration detail needed to positively classify each system’s exposure. So this group, too, goes system by system to evaluate and patch, and it faces the risk of not finding every vulnerable system.
A proactive security group has an asset list that is updated quarterly, with configuration management for each machine. This group is starting to engage in actual risk management. It misses far fewer systems and prioritizes its efforts on critical systems first. The IR team quickly ranks the systems by exposure level, remotely patching those it can and scheduling the rest for manual updates.
An optimized security group may be barely affected by the threat. It has current asset information and live vulnerability scanning. Its patch management system updates the most exposed systems as the security patches become available, while the environment is protected by defense-in-depth countermeasures. The IR team is able to continue with its normal work processes of dealing with incidents.
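The exposure-ranking step that separates the proactive and optimized groups from the reactive ones can be sketched as a scoring pass over the asset inventory. The fields and weights below are hypothetical stand-ins for whatever a real configuration-management database records:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    internet_facing: bool
    runs_bash: bool
    criticality: int  # 1 (low) .. 3 (business-critical); scale is illustrative

def exposure_score(asset: Asset) -> int:
    """Rank vulnerable assets; internet-facing systems outweigh internal ones."""
    if not asset.runs_bash:
        return 0  # not exposed to this particular bug
    return asset.criticality * (2 if asset.internet_facing else 1)

fleet = [
    Asset("web-gateway", internet_facing=True, runs_bash=True, criticality=3),
    Asset("build-box", internet_facing=False, runs_bash=True, criticality=1),
    Asset("file-server", internet_facing=False, runs_bash=False, criticality=2),
]

# Patch the highest-exposure systems first; skip those with no exposure at all.
patch_queue = sorted(
    (a for a in fleet if exposure_score(a) > 0),
    key=exposure_score,
    reverse=True,
)
```

The point is not the particular weights, but that a current, detailed inventory lets triage happen in minutes instead of the days a reactive group spends just finding its systems.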
When we begin the conversation about maturity, most companies are unaware of their posture and are often surprised by the assessment results. We’d say the bulk of companies get at best a C grade, with only the exceptional passing with a few A’s in specific areas. C is for Compliant, which is not sufficient for security these days. Companies with high-sensitivity or regulated data should aim to be at least proactive, with a stretch goal of optimized within two to three years.