Shellshock’s Cumulative Risk One Year Later
How long does it take to patch an entire distribution and bring it up to date? Longer than you think.
When Windows XP reached its end of life, approximately 257 outstanding security patches were required to bring the OS up to date -- and that figure already accounts for cumulative updates across commercial and business editions, as well as patch supersedence. Because its source code is closed, Windows never suffered from the fragmentation issues that plague many open source vulnerabilities, such as what we saw last year with the now infamous Shellshock, which impacted roughly half a billion Web servers and other Internet-connected devices.
When the bug was first disclosed on September 24, 2014, Linux, OS X, Unix, and countless embedded systems were all affected; every version of the GNU Bash shell released from 1994 (version 1.14) through 2014 could be exploited. But in the intervening 12 months, far more needed to be done than patching a single platform and bringing it up to date -- and that in itself is something many organizations still find difficult to do today.
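For readers who want a quick sanity check, the classic test for the original flaw (CVE-2014-6271) is to hand bash a crafted function-style environment variable and see whether it executes the trailing command while importing it. The short Python sketch below wraps that test; it assumes Python 3.7+ and that bash sits on the standard system path, and it only covers the first Shellshock CVE, not the follow-on bugs.

```python
import subprocess

# Export a crafted function-style environment variable and see whether bash
# executes the trailing command when it imports the variable on startup.
# A patched bash prints only "test"; an unpatched one also prints "vulnerable".
env = {"PATH": "/usr/bin:/bin", "x": "() { :;}; echo vulnerable"}
result = subprocess.run(
    ["bash", "-c", "echo test"],
    env=env,
    capture_output=True,
    text=True,
)

if "vulnerable" in result.stdout:
    print("This bash still appears vulnerable to CVE-2014-6271")
else:
    print("This bash does not appear vulnerable to the original flaw")
```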
Why do these facts matter on the one-year anniversary of Shellshock? Over time, it has become harder to perform a thorough assessment of every vulnerability found in the past, many of which are still applicable. The bigger issue, though, is the cumulative weight of flaws affecting systems. Information security teams must remember that Shellshock and other flaws are not gone simply because they have disappeared from media headlines. Some systems, due to age, end of life, or a vendor's incompetence, remain unpatched today.
While mitigating controls may reduce or even eliminate the risk, each newly identified vulnerability aggravates the problem and adds to the total count. Shellshock, as bad as it was -- and may still be today -- is just one more critical vulnerability piling up on systems that are not being patched, or even assessed. The result is a cumulative risk problem, in which an attacker could exploit a system through multiple vectors instead of just one.
While this may seem like common sense, it raises an interesting question: How long does it take to patch an entire distribution and bring it up to date? The more missing patches there are, the more time remediation requires, and with each new finding the snowball of risk grows. This holds for any operating system or application, even when a single cumulative security update is available: the more you have to patch, the longer it takes and the more it costs.
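To see why the backlog snowballs, consider a back-of-envelope model. The numbers below are purely illustrative and not drawn from any real environment: if patches arrive faster than the team can apply them, the count of outstanding patches -- and the window of exposure -- grows every month.

```python
# Illustrative figures only: assumed patch release rate, assumed remediation
# capacity, and a starting backlog (e.g., an end-of-life system's missing
# patches). When releases outpace capacity, the backlog grows each month.
new_patches_per_month = 30
patches_applied_per_month = 20
backlog = 257

for month in range(1, 13):
    backlog += new_patches_per_month - patches_applied_per_month
    print(f"Month {month:2d}: {backlog} outstanding patches")
```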
So how can infosec teams reduce the cumulative side effects from new vulnerabilities? Here are five suggestions:
Ensure that your organization has a vulnerability assessment and patch remediation process for identifying risks. This keeps patching moving quickly: once a flaw is found, regardless of how, it can be closed in a timely fashion before it joins the cumulative backlog.
When selecting technology vendors, verify that service-level agreements cover security and maintenance patches, as well as end-of-life dates for operating systems, embedded devices, and applications. This ensures that solutions, once deployed, can be remediated for the full life expectancy of the implementation.
Vulnerability assessment itself has evolved greatly since the days of standalone network scanners. Many security tools -- IDS/IPS systems, endpoint agents, next-generation firewalls, sniffers, and so on -- can detect vulnerabilities that are dormant or actively being probed in a hostile environment. Do not rely on a single technology to identify risks and missing security patches. Leverage every one of them and correlate the information to identify weaknesses and attacks (a minimal correlation sketch follows this list of suggestions).
Credentialed access is always a challenge on sensitive hosts, and un-hardening them is often simply not permissible. When local agents are not allowed to do the work (patch management, Windows Update, or even vulnerability assessment agents), consider building a cloned, isolated lab environment where remote access is permitted, un-harden the duplicate hosts there, and perform the assessments against them. The results can then be applied to production (a sketch mapping lab findings back to production hosts also follows this list). This works well with the cloning features present in many virtualization technologies.
When network assessment technologies are deployed, work with the network infrastructure team to place scanners as logically close to their targets as possible -- not across WAN links. Avoid scanning through firewalls and other security sensors, and whitelist the scanner only when needed. Scanners should be close enough to each subnet that they have unrestricted access to every port on every target they are responsible for, whether they are hosted in the cloud, run as a virtual image, or sit as a physical appliance in a remote country. (A simple reachability check is sketched below.)
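To illustrate the correlation point from the third suggestion, here is a minimal sketch that merges per-host findings from several hypothetical detection layers into a single view. The tool names, hostnames, and CVE identifiers are placeholders; the only assumption is that each tool can export a per-host list of suspected CVEs.

```python
from collections import defaultdict

# Placeholder findings from three assumed detection layers.
scanner_findings = {"web01": {"CVE-2014-6271", "CVE-2015-0235"}}
ids_alerts = {"web01": {"CVE-2014-6271"}, "mail01": {"CVE-2014-6271"}}
agent_findings = {"web01": {"CVE-2015-0235"}, "mail01": set()}

# Merge every source into one per-host view, tracking which tools saw each CVE.
correlated = defaultdict(lambda: defaultdict(set))
for source, findings in [("scanner", scanner_findings),
                         ("ids", ids_alerts),
                         ("agent", agent_findings)]:
    for host, cves in findings.items():
        for cve in cves:
            correlated[host][cve].add(source)

# A CVE reported by more than one layer is a stronger signal and a better
# candidate for immediate remediation.
for host, cves in correlated.items():
    for cve, sources in cves.items():
        print(f"{host} {cve} seen by: {', '.join(sorted(sources))}")
```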
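For the lab-clone approach in the fourth suggestion, the bookkeeping step is mapping findings from the cloned, un-hardened hosts back to their production counterparts, so remediation tickets reference the real assets. A trivial sketch, with entirely hypothetical hostnames and findings:

```python
# Hypothetical mapping from cloned lab hosts back to the production hosts
# they were duplicated from; all names and CVEs are illustrative only.
clone_to_production = {
    "lab-web01": "web01.example.com",
    "lab-db01": "db01.example.com",
}
lab_findings = {
    "lab-web01": ["CVE-2014-6271", "CVE-2014-7169"],
    "lab-db01": ["CVE-2014-6271"],
}

# Re-attribute each lab finding to its production counterpart.
production_findings = {
    clone_to_production[host]: cves for host, cves in lab_findings.items()
}

for host, cves in production_findings.items():
    print(f"{host}: {', '.join(cves)}")
```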
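And for scanner placement, a quick way to verify that a scanner has the unrestricted access described in the fifth suggestion is to attempt TCP connections to the ports it is responsible for. The hosts and ports below are documentation-range placeholders; a real check would cover every port in scope.

```python
import socket

# Confirm the scanner can open TCP connections to its assigned targets
# without a firewall or WAN link in the way. Addresses are placeholders.
targets = {"192.0.2.10": [22, 80, 443], "192.0.2.20": [3389]}

for host, ports in targets.items():
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=2):
                state = "reachable"
        except OSError:
            state = "blocked or closed"
        print(f"{host}:{port} {state}")
```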
When was the last time you checked under the hood for the cumulative impact of Shellshock or other vulnerabilities? Let’s chat about that in the comments.