It seems earthshaking vulnerabilities are released weekly, leaving vendors and system administrators scrambling to remediate. So, where are all these vulnerabilities coming from? A simple search of the National Vulnerability Database shows over 3,300 new vulnerabilities released in just the past three months. Granted, many of these vulnerabilities are esoteric and limited to specific niche applications. But nearly every other month we see a large-scale hole that affects millions. The most egregious example was Heartbleed, which affected nearly half of all Internet web servers.
But why so many, and so often? The simple reason is that vulnerabilities are an emergent property of software, and there are three major causes: code quality, complexity, and trusted data inputs.
Code Quality
This is where everyone points their finger first. But why? Sloppy programming? Not necessarily. More often than not, it's a conscious choice. In most development teams, the highest priority is given to the features for which customers will pay. And outside of the security group, most people do not want to pay for security. I say most because there are those who will pay for it, but they usually end up with applications and systems that are not as useful or flexible as the mainstream, less secure products.
Another driving force working against code quality is the concept of a minimum viable product: a product with just enough features and value to gain traction with customers. Any other features are secondary and can be added later. The mantra is: never build a mansion when a tent will do. The problem is that we find ourselves living in the tent for years on end. We also know that fixing security problems down the road is more expensive, yet security features are still delayed in the face of new customer (and market) demands. Often, it isn't until after a series of security calamities that security is raised to a priority.
Complexity
Most modern applications are so complex that they are beyond the understanding of a single person. To the average user, all this complexity is hidden by the user interface and underlying infrastructure, but IT professionals know better. Consider the current version of the Firefox browser, which contains 16 million lines of code written by 5,094 developers over ten years.
If you consider all the moving parts, interdependencies, layers, libraries, interface modes, and backward compatibility built into these applications, it’s no wonder that there are serious gaps in security coverage. It is also widely known that dynamic and complex systems are hard to predict and can lead to unexpected outcomes. One thing is certain, though: large, complex software applications will contain bugs, and some of those bugs will be security vulnerabilities.
Overly Trusting Data Inputs
If you examine most security vulnerabilities, you will see that they occur where the program is accepting data input. Therefore, every data input into a system is an attack surface. These vulnerabilities exploit weak boundaries where input systems expect data but instead are breached to insert new commands. Look at where attacks such as buffer overflows, SQL injection, or cross-site scripting occur: data input channels that are subverted. This is not a new problem. Decades ago, programmers were taught to expect non-conformant input and filter accordingly. Given the complexity of software and the speed at which it is developed, it is not surprising that programmers do not have the resources or time to ensure robust filtering of every possible input stream.
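The filtering the paragraph above describes can be sketched in a few lines. This is a minimal illustration, not a complete defense: it uses a hypothetical user-lookup function and shows the two classic countermeasures against injection through a data input: validating input against a strict allowlist, and passing data to the database as a bound parameter so it can never be interpreted as SQL.

```python
import re
import sqlite3

# Allowlist: usernames may only contain letters, digits, and underscores.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{1,32}$")

def lookup_user(conn, username):
    # Reject anything outside the expected character set before it
    # ever reaches the query layer.
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("invalid username")
    # Parameterized query: the driver treats username strictly as data,
    # so injected quotes or keywords are never parsed as SQL.
    cur = conn.execute("SELECT id FROM users WHERE name = ?", (username,))
    return cur.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

print(lookup_user(conn, "alice"))        # valid input returns the row
try:
    lookup_user(conn, "x' OR '1'='1")    # classic injection attempt
except ValueError:
    print("rejected")                    # filtered out at the boundary
```

Either defense alone blocks this particular attack; using both is the kind of layered filtering the article argues programmers rarely have time to apply to every input stream.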
Pulling It All Together
In “How Complex Systems Fail,” author Richard I. Cook notes that “catastrophic failure occurs when small, innocuous failures join to create a systemic problem.” These problems combine to create the chronic disease of security vulnerabilities pervading the entire software industry.
How can security teams respond to these issues? For one, organizations can use these principles to roughly estimate the magnitude and frequency of potential vulnerabilities in a system, which can also assist in risk assessments. Since every input is a possible attack path, reduce your exposure to just the services you absolutely need to put on the Internet. If you do expose an input path, filter it and monitor it. Also, remember that security tools are software too, so build for defense in depth, and test often.

Raymond Pompon is a Principal Threat Research Evangelist with F5 Labs. With over 20 years of experience in Internet security, he has worked closely with Federal law enforcement in cyber-crime investigations. He has recently written IT Security Risk Control Management: An ...