If you could design a silver bullet to help win the fight against the cyberthreat, what would you do? My silver bullet idea is always the creation of a "magic button" that automatically patches every operating system, plug-in and application in an environment via a method that does not break our business processes, applications, or databases, and has no customer impact.
I am not delusional; I did say it has to be a "magic button." But if you had any doubt about the magic part, the recent Cisco 2015 Annual Security Report confirms that in both large and small environments, security teams are not able to keep up with patching. It is shocking that fewer than 40 percent of companies have an organized effort for patching and configuration management.
The study also states that 90 percent of companies are "confident about their security policies, processes, and procedures." These findings expose either a lack of appreciation at the executive level of how important patching is for hardening an environment against attack, or a disconnect with the security operations managers who battle attacks day to day. The confidence levels vary only slightly between verticals, geographies and job titles.
Threat actors are only after two percent of your network. They use the other 98 percent to exploit that two percent. When you view the challenging patching problem through this lens, it becomes obvious that not every machine or patch should be treated with an equal sense of urgency.
I have been accused of having Obsessive Compulsive Disorder, mostly by my spouse and the FireHost security team. When building a winning patching strategy, however, a touch of obsessive-compulsiveness is a desirable trait. The first step is to organize your environment, which is the hardest part of the process. The easiest population to address first in your patching strategy is your user hosts. Take three specific steps:
- Categorize: Move all of your user hosts to VLANs and OUs that categorize them by machine type, OS, plug-in requirements and, if you can, by importance to the business.
- Segment: Will user machines — that likely have older applications running on them — break if the OS is patched? These user hosts may need human intervention as well, or maybe a lag in patching. Segment them away from anything you really don't want breached.
- Automate: Win back valuable time by automating the heck out of user-machine patching so you can focus your human intervention on servers.
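The categorize step above can be sketched as a simple inventory grouping. This is a minimal illustration, assuming a hypothetical host inventory; the field names and values here are made up, and in practice the records would come from your CMDB, Active Directory, or an asset-discovery tool:

```python
from collections import defaultdict

# Hypothetical inventory records (illustrative only).
hosts = [
    {"name": "wks-001", "os": "Windows 10", "type": "workstation", "critical": False},
    {"name": "wks-002", "os": "Windows 10", "type": "workstation", "critical": True},
    {"name": "lap-001", "os": "macOS", "type": "laptop", "critical": False},
]

def categorize(hosts):
    """Group user hosts into buckets keyed by (type, OS, criticality).

    Each bucket maps naturally onto a VLAN or OU, so patch policy can be
    applied per group instead of per machine.
    """
    buckets = defaultdict(list)
    for h in hosts:
        key = (h["type"], h["os"], "critical" if h["critical"] else "standard")
        buckets[key].append(h["name"])
    return dict(buckets)

for key, names in sorted(categorize(hosts).items()):
    print(key, "->", names)
```

The point of keying on machine type, OS and business importance is that every downstream decision (segmentation, patch lag, automation) is made once per bucket rather than once per host.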
The next population to address is your servers, which present a problem similar to the user hosts. First, organize your server environment into three bucket categories: "must patch," "should patch," and "can't patch." Understand that you will almost always have high human intervention when patching any server. But if you organize your server farms into segments of like OSes and applications, some of the process can be automated.
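The three-bucket triage can be expressed as a small decision function. The classification rules below are illustrative assumptions drawn from the criteria in this article (regulated data, business criticality, unpatchable legacy platforms), not a standard:

```python
MUST, SHOULD, CANT = "must patch", "should patch", "can't patch"

def triage(server):
    """Assign a server to a patch bucket.

    Rules (assumed for illustration):
    - Legacy platforms with no vendor patches -> can't patch
    - Regulated or business-critical workloads -> must patch
    - Everything else -> should patch
    """
    if server.get("legacy_unpatchable"):
        return CANT
    if server.get("regulated_data") or server.get("business_critical"):
        return MUST
    return SHOULD

# Hypothetical server records.
servers = [
    {"name": "db-pci-01", "regulated_data": True},
    {"name": "web-05"},
    {"name": "scada-ctrl", "legacy_unpatchable": True},
]
for s in servers:
    print(s["name"], "->", triage(s))
```

Checking the "can't patch" condition first matters: a legacy system holding regulated data still lands in the can't-patch bucket, which is exactly the business-risk situation discussed later in this article.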
The obvious servers to address first in your infrastructure are workloads (e.g., applications and databases) that manage regulated data, such as PCI or electronic medical records. These are easy to identify. If you struggle to identify them, you have a bigger problem: you may never be compliant. The servers that manage regulated data must be the top priority for your patching efforts and must not lag.
Properly segmenting these servers from the rest of your environment, whether via strong internal ACLs or by moving them to a third-party service provider that focuses on security and compliance, greatly lowers the risk that they will be compromised from other servers that fall. Another benefit: the segmented data is out of scope for a forensic investigation involving other compromised servers.
Other "must patch" servers center around those that manage workloads that would devastate the business if breached. Some examples of these servers include financial systems, HR workloads and development labs where critical intellectual property is created.
Threat actors are really only after two percent of your environment, and the must-patch group is likely half of that two percent. So what is the other one percent? Think about the threat actor kill chain and what threat actors do to gain persistence, escalate privileges and move laterally. While this one percent is not the final target, these are critical systems the threat actors need to achieve their objectives.
At FireHost, we refer to this as "key terrain." Some examples of this are Active Directory servers, patch management systems (ironic, I know), software distributions systems and any other system that allows a threat actor to escalate privileges, create accounts and spread laterally. It's best practice to scan and patch these systems weekly, making them a hard target.
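A weekly cadence is easy to enforce with a staleness check. This is a minimal sketch, assuming a hypothetical inventory of key-terrain systems with the date of their last successful patch run:

```python
from datetime import date, timedelta

PATCH_WINDOW = timedelta(days=7)  # weekly cadence for key terrain

# Hypothetical key-terrain systems and last successful patch dates.
key_terrain = {
    "ad-dc-01": date(2015, 3, 2),   # Active Directory domain controller
    "wsus-01": date(2015, 2, 20),   # patch management server
    "sccm-01": date(2015, 3, 5),    # software distribution server
}

def overdue(systems, today):
    """Return key-terrain systems whose last patch is past the weekly window."""
    return sorted(name for name, last in systems.items()
                  if today - last > PATCH_WINDOW)

print(overdue(key_terrain, date(2015, 3, 6)))  # -> ['wsus-01']
```

Feeding a list like this into an alerting or ticketing pipeline turns "best practice" into a standing control rather than a good intention.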
At the tail end is the "should patch" bucket: everything else that you can patch. The residual of this group should be servers you need to do business, but whose workloads are not an existential threat to the business if breached. Understand that you may not get to all of the servers in this group in a timely manner.
Yes, at the end of this organizational effort, you may be surprised to find you have systems that can't be patched. Some examples are legacy databases that used to run on mainframes and critical infrastructure/SCADA. A really bad situation? Ending up with systems that can't be patched but that house or interact with regulated data. That will not allow you to achieve compliance and puts your business at risk. If your analysis comes to this conclusion, it becomes a business risk that must be addressed, usually with a significant investment.
The focus of a sound security strategy is to make the attack surface so small that it drives up the skill level required to exploit the environment. The cornerstone of this strategy is an organized effort for patching vulnerabilities. This is not the sexy technology that security folks like to talk about post-breach, but it truly is the best way to have a significant impact on your security posture.