Security must be woven throughout the software development process. Testing tools are helping make that happen.

Dark Reading Staff, Dark Reading

November 25, 2009


Software testing comes in many flavors. Unit testing analyzes individual components before they're integrated into larger systems. System and integration testing checks that modules work together. Regression testing verifies that everything still works after a change is made to the code. And security testing checks that data is protected.

Tools such as source-code scanners, security-aware compilers, and application scanners help developers find vulnerabilities in code. And techniques like fuzz testing uncover inputs that can cause apps to behave badly. With fuzzing, we deliberately attack software with random data in search of weaknesses and unexpected responses.
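To make that concrete, here's a minimal fuzzing harness in Python--a sketch, not a production fuzzer--that throws random bytes at a parser and flags any exception the code wasn't designed to raise. The parse_record function and its expected format are invented for illustration:

```python
import random

def parse_record(data: bytes) -> str:
    """Stand-in for the code under test: expects b'name:value'."""
    name, value = data.split(b":")            # raises ValueError on malformed input
    return f"{name.decode()} = {int(value)}"  # raises on bad encoding or non-numeric value

def fuzz(iterations: int = 10_000) -> None:
    for _ in range(iterations):
        # Generate a random byte string of random length.
        data = bytes(random.randrange(256) for _ in range(random.randrange(1, 64)))
        try:
            parse_record(data)
        except (ValueError, UnicodeDecodeError):
            pass  # Rejections the parser was designed to make.
        except Exception as exc:
            # Anything else is the unexpected behavior we're fuzzing for.
            print(f"Unexpected {type(exc).__name__} on input {data!r}")

if __name__ == "__main__":
    fuzz()
```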

Fuzz testing is particularly important in Web application development, and it's playing a growing role in ensuring that "security quality"--the confidentiality, integrity, and availability of systems and data for users--is integrated into every phase of development.

Fuzz testing finds vulnerabilities that are too complicated for a person to see. With complex programs--Windows 7 has an estimated 50 million lines of code, for instance--it's difficult to anticipate everything that can go wrong. Fuzzing takes a "big hammer" approach to systems, hitting them persistently with strange inputs.

Fuzzers also save companies money because they don't require much human intervention. Microsoft fuzzes its apps until end of life because it's so cheap to do. Another advantage of fuzz testing is its simplicity: tests that target specific vulnerabilities can be implemented quickly. Fuzzing has recently been credited with finding vulnerabilities in the Windows 7, iPhone, and Android communications protocols.

Early Fuzzing

My first fuzzing experience was in the late 1980s when soda vending machines appeared in the Bahamas, where I grew up. But the machines had signs on them saying "U.S. Quarters Only." The Bahamian currency is pegged one-to-one with the U.S. dollar, and both currencies are accepted everywhere. When my friends and I came across our first soda machine, none of us had a U.S. quarter, so we started experimenting. One of us put a washer in the slot. It dropped right through. We tried a U.S. dime. Nothing. A Bahamian quarter. Still nothing. The outlook for soda was bleak until we dropped a Bahamian 10-cent piece in.

To our amazement, the display instantly registered "25"--the machine mistook the 10-cent coin for a U.S. quarter. We scrambled to find three more 10-cent pieces. The machine read "50," "75," and finally "1.00." We hit Select and got a soda. Years later, I learned that the vending machine measured diameter and weight to determine if an object dropped into the slot was a U.S. quarter. Bahamian 10-cent pieces coincidently have almost exactly the same diameter and weight as U.S. quarters.

Without understanding the mechanics of the machine, we had applied semirandom inputs and hoped for something interesting to happen. What we were doing is now known as "fuzzing"--applying inputs with some degree of randomness, then looking for unexpected outputs. In theory, it sounds straightforward, but in practice, it can be difficult to do well.

How Fuzzing Works

There are many freely available fuzzers, such as Peach Fuzzer from Eddington and Frantz, Spike from Immunity, and MiniFuzz from Microsoft, as well as commercial fuzzers such as Codenomicon's Defensics and Beyond Security's beSTORM.

Web application fuzzers are by far the most mature. They understand how an application is supposed to respond to normal input, then look for either deviations or symptoms of failure. When testing for SQL injection vulnerabilities, for instance, an input might be a string of text containing a single quote mark. If a database error message comes back, the tester knows there's a potential vulnerability.
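In sketch form, assuming a hypothetical search page at http://example.test/search with a parameter q (both invented here), that probe might look like this:

```python
import urllib.parse
import urllib.request

# Error fragments that often appear when a back-end database chokes on an
# unescaped quote; real scanners use much larger signature lists.
DB_ERROR_SIGNATURES = ["sql syntax", "odbc", "unclosed quotation mark", "sqlite"]

def probe_for_sqli(base_url: str, param: str) -> bool:
    """Send a single-quote payload and look for database error symptoms."""
    query = urllib.parse.urlencode({param: "'"})
    with urllib.request.urlopen(f"{base_url}?{query}") as response:
        body = response.read().decode(errors="replace").lower()
    return any(sig in body for sig in DB_ERROR_SIGNATURES)

if __name__ == "__main__":
    # Hypothetical target URL and parameter, for illustration only.
    if probe_for_sqli("http://example.test/search", "q"):
        print("Potential SQL injection: database error leaked in response")
```

A real Web fuzzer cycles through hundreds of payloads and failure signatures per parameter, but the ask-and-observe pattern is the same.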

For non-Web software, fuzzing is murkier. These fuzzers corrupt files, network traffic, or API parameters in three basic steps:

  1. Generate a random input (or randomly corrupt an existing one).

  2. Deliver that input to the application.

  3. Monitor the application for an exception or crash.

This approach finds problems such as buffer overflows, which usually announce themselves as crashes. Fuzzing can also surface noncrashing vulnerabilities, as long as detecting their symptoms can be automated.
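A bare-bones mutation fuzzer following those three steps might look like the sketch below. The seed file (sample.dat) and target program (./parser_under_test) are placeholders, and the crash check assumes a POSIX system:

```python
import random
import subprocess
import tempfile

def mutate(seed: bytes, flips: int = 8) -> bytes:
    """Step 1: randomly corrupt a known-good input."""
    data = bytearray(seed)
    for _ in range(flips):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def run_target(target: str, payload: bytes) -> int:
    """Steps 2 and 3: deliver the input and watch how the process exits."""
    with tempfile.NamedTemporaryFile(suffix=".dat") as f:
        f.write(payload)
        f.flush()
        proc = subprocess.run([target, f.name], capture_output=True, timeout=10)
    return proc.returncode

if __name__ == "__main__":
    seed = open("sample.dat", "rb").read()               # placeholder seed file
    for i in range(1000):
        payload = mutate(seed)
        rc = run_target("./parser_under_test", payload)  # placeholder target
        # On POSIX, a negative return code means the process died on a
        # signal (e.g., -11 for SIGSEGV), a classic crash symptom.
        if rc < 0:
            with open(f"crash_{i}.dat", "wb") as out:
                out.write(payload)
            print(f"Crash (signal {-rc}); input saved to crash_{i}.dat")
```

Real fuzzers layer smarter mutation strategies, coverage feedback, and crash triage on top, but the loop stays the same.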

Back at the soda machine, we were looking for one thing: the red display showing that the money we put in had registered. In retrospect, other outcomes would have been interesting: the machine might have crashed, short-circuited, or become hopelessly jammed. If you can specify the symptoms you're interested in, fuzzing can be an incredibly cheap way to see how a system responds to a large set of inputs. The catch is that it can miss more subtle architectural problems.

Beyond The Requirements

Most security vulnerabilities aren't requirements violations; rather, they come from incomplete requirements. For example, a simple requirement takes the form of "when a user applies input x, the software should produce output y." It's easy to test such requirements: Apply input x, then look for y.
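In unit-test form, that's a one-liner. Assuming a hypothetical discount function whose requirement reads "a $100 order costs $90":

```python
def apply_discount(total: float) -> float:
    """Hypothetical function under test: every order gets 10% off."""
    return total * 0.9

def test_discount_requirement():
    # Apply input x, then look for output y.
    assert apply_discount(100.0) == 90.0
```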

Verifying requirements is critical, but security testing looks beyond the intended behaviors set out in requirements. Behind those behaviors may lie many unspecified assumptions and constraints. For example, if an input or output is sensitive (say, a credit card number), storing it even temporarily in an insecure place is a bad idea. Developers may overlook this and make design and implementation choices that leave the software at risk. To instill security quality into a project, testers use fuzzing and similar techniques to look beyond stated requirements and answer questions like these (a sketch of such a test follows the list):

>> What isn't the system supposed to do?

>> How should inputs, functionality, and data be restricted?

>> Are security features correct, and is functional code secure?
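To make the contrast with requirements testing concrete, here's a sketch of such a test: it asserts that a hypothetical payment routine does not leave the full card number behind in its log. Every name here is invented for illustration:

```python
import os
import tempfile

def process_payment(card_number: str, log_path: str) -> None:
    """Hypothetical routine under test; logs each transaction."""
    with open(log_path, "a") as log:
        log.write(f"payment processed, card ending {card_number[-4:]}\n")

def test_card_number_not_logged():
    card = "4111111111111111"
    log_path = os.path.join(tempfile.mkdtemp(), "app.log")
    process_payment(card, log_path)
    # The requirement says what the code should do;
    # this test asserts what it must NOT do.
    with open(log_path) as log:
        assert card not in log.read()
```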

Vulnerabilities can be introduced at any stage of development. A key security requirement might be poorly defined or missing. An architectural weakness could get introduced during design. Developers may use a vulnerable function to process user input. Security testing tools can check for these vulnerabilities, but there are limitations.

Over the past few years, several classes of security testing tools have emerged. These tools, some of which incorporate fuzz testing, have taken over much of the mechanical work of security testing. While they tend to find certain classes of vulnerabilities, they miss many others. They aren't substitutes for security testing; rather, like a pair of pliers, they extend your reach when testing software. Bottom line: it's people thinking creatively about how to misuse or abuse software who drive security testing.

For example, source-code scanners and security-aware compilers can look for functions and constructs that commonly result in vulnerabilities. They can alert you that code may have a weakness even when its syntax is essentially correct. While these tools can be useful during development and testing, they see only part of the picture.
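For instance, the query-building code below is syntactically fine, yet it's exactly the kind of construct a scanner flags, because concatenating user input into SQL commonly results in injection. The parameterized version underneath is the usual fix (table and column names are invented):

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Syntactically correct, but a scanner would flag it: user input is
    # concatenated straight into the SQL statement.
    query = "SELECT * FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the database driver handles escaping.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()
```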

Most applications are highly interconnected and interact heavily with other software. Statically scanning the code of one application or module doesn't give a complete view of how a running application will respond to hostile input. Dynamic security tools help bridge that gap: application scanners, which focus primarily on Web apps, generate input strings that can reveal potential vulnerabilities and watch for symptoms of failure. These tools are good at finding low-hanging fruit and tend to concentrate on the more common vulnerabilities.

Architectural Vulnerabilities

We were about 13 sodas into our discount scheme when we discovered another design choice that made the soda machine's flaw far worse. The machine ran out of sodas, and a little red "Empty" light came on. Not willing to take a loss on the 40 cents we'd just invested, we pressed the "Coin Return" button, assuming the machine would give back our four Bahamian 10-cent pieces. Instead, it returned four U.S. quarters--a 150% return on our investment! This was a definite upgrade over free sodas.

On its own, the coin-return design was probably a good one, based on mechanics and convenience. The developer relied on the assumption that the rest of the system worked as intended--including that it correctly identified objects as quarters. The coin-return choice by itself wasn't a security vulnerability, but it severely amplified the impact of the machine's existing problem: mixing up U.S. quarters and Bahamian 10-cent pieces. Sound systems security requires defense-in-depth, a principle that forces design choices to create a safety net around potential problems. Flaws like this one speak to the need for a more holistic approach to security testing.

Software security quality should be woven throughout the development process. It begins in requirements and design, is propagated through development and testing, and continues into deployment and support. The good news is that there are lots of resources. Microsoft has made many of its processes and tools available through its Security Development Lifecycle, which builds security and privacy into every phase of development. Likewise, SAFECode is synthesizing and distributing security best practices from some of the world's largest software makers.

The various secure development methodologies have some common themes, with the need for education perhaps being the biggest. The term "education" isn't exactly right--what's really needed is re-education so developers understand that security isn't a natural outcome of traditional quality-assurance processes. Building secure software takes focused work and a different mind-set.

In a few years, if all goes well, perhaps we'll be beyond worrying about security quality because security will be woven into software in the same way that reliability is. But until that happens, I still have a few more sodas to buy.

Herbert H. Thompson is chief security strategist for People Security, a security education company. He also teaches software security at Columbia University in New York.
