A true code review involves both scanning and architectural risk analysis

Dark Reading Staff, Dark Reading

February 13, 2008


Quick, which one of these statements is correct? Open source software is more secure than closed source. Proprietary software is more secure than open source.

The answer is neither one! Software is software, and security should play an essential role in every kind of software, not just one flavor (crunchy organic granola) or another (mass-produced waxy chocolate bars). And don’t get me started on “many eyeballs” or the economic reasons why proprietary software is likely to be better tested. In the end, I believe the big debate over whether open source is more or less secure is a red herring.

The real lesson behind all the hoo-hah is that both open source projects and gigantic proprietary software divisions can benefit from software security best practices. Remember the touchpoints? They’re useful in all kinds of software projects.

In March 2006, the U.S. Department of Homeland Security began sponsoring the Scan project, an effort by Coverity and Stanford University to apply Coverity’s code scanning engine to widely used open source projects in search of bugs. Lots of bugs have been uncovered and, more importantly, fixed. A flurry of recent tech press stories has simultaneously declared security victory and bemoaned security defeat. So which is it?

First, the optimistic view. The code scanning battle is well under way, and we are winning! I am a big fan of code scanning and believe that using static analysis tools should always be one of the basic security steps integrated into every software development life cycle (SDLC).

There are a number of very good code scanning tools available commercially these days, including Coverity’s Prevent, Fortify’s Source Code Analysis, Klocwork, and Ounce Labs’ Ounce 5. These tools each have particular strengths, and we have used almost all of them in our code scanning work here at Cigital.

Efforts like Coverity’s Scan project, which sparked this article, and Fortify’s Java Open Review project are excellent examples of how code scanning technology can be applied in a way that helps everybody. The FindBugs project is also worthy of mention, because it’s not only targeted at open source: it is open source.

Okay, now for the reality check. There are huge problems with the notion of declaring "security" after passing a code scan with an arbitrary tool and a random set of rules. Fortunately, the code scanning vendors know this (well, at least their technical people do). Unfortunately, the press does not.

The most obvious issue is that security defects come in two flavors – implementation bugs found at the code level and architectural flaws found at the design level. Each of these accounts for roughly half of the defects in practice.

Code scanning tools can only find bugs. Can a code scanning tool determine that no user authentication was performed? How about whether or not a playback attack will work? (Just for the record, the answer is “no way” in both cases.)
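To make the bug/flaw distinction concrete, here is a contrived Java sketch of my own (the class, methods, and table names are all invented for illustration, not taken from any scanned project):

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import javax.servlet.http.HttpServletRequest;

    public class AccountServlet {

        // BUG (code level): classic SQL injection. A static analysis rule
        // that traces tainted request data into Statement.executeQuery()
        // can flag this line automatically.
        ResultSet lookup(Connection db, HttpServletRequest req) throws Exception {
            Statement stmt = db.createStatement();
            return stmt.executeQuery(
                "SELECT * FROM accounts WHERE owner = '"
                    + req.getParameter("user") + "'");
        }

        // FLAW (design level): this handler moves money without ever
        // authenticating the caller. No single line is "wrong," so there
        // is no pattern for a scanning rule to match; the defect is the
        // check that isn't here. Only design-level review catches it.
        void transfer(Connection db, HttpServletRequest req) throws Exception {
            moveFunds(db, req.getParameter("from"),
                      req.getParameter("to"),
                      req.getParameter("amount"));
        }

        private void moveFunds(Connection db, String from, String to,
                               String amount) {
            // funds-transfer details omitted for brevity
        }
    }

The same asymmetry applies to the playback attack question: whether a captured request can be replayed is a property of the protocol design, not of any single statement a scanner could flag.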

Another obvious problem is that the list of rules enforced by a static analysis engine can never be complete. The idea of trying to write down a list of all possible security bugs that could ever occur in any language (and then cramming them into a code scanner) is just plain silly. The set is infinite. Still, using a list of known problems (even if it’s incomplete) is a great idea, so code scanners have their place in the software security toolkit.
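Here is a contrived illustration of why the list can never be complete (again, the code is mine, invented for the point). A naive rule that matches untrusted input concatenated directly into Runtime.exec() catches the first method but not the second, even though both contain the same command injection defect:

    import java.io.IOException;

    public class PingService {

        // A simple pattern-matching rule can flag this: untrusted input
        // concatenated straight into Runtime.exec(). Command injection.
        void pingDirect(String host) throws IOException {
            Runtime.getRuntime().exec("ping -c 1 " + host);
        }

        // The same defect, one step removed. A rule keyed to the literal
        // exec("..." + input) pattern sails right past it, even though
        // the tainted data still reaches the shell.
        void pingIndirect(String host) throws IOException {
            run(buildCommand(host));
        }

        private String buildCommand(String host) {
            StringBuilder sb = new StringBuilder("ping -c 1 ");
            sb.append(host);
            return sb.toString();
        }

        private void run(String cmd) throws IOException {
            Runtime.getRuntime().exec(cmd);
        }
    }

Interprocedural dataflow analysis (which the good commercial tools do perform) handles this particular dodge, but there is always another layer of indirection; no finite rule set covers them all.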

In the end, passing a code review is an indicator. I liken it to taking a patient's temperature: it's a great first step, it's easy to do, and it happens all the time. But it's not the world's best diagnostic tool. If the "temperature" is way out of bounds, you should seek medical attention for your code. But you can also die of some injuries without ever running a temperature.

Architectural risk analysis (sometimes called ARA or "threat modeling") is, like code scanning, an essential software security best practice. Where code scanning finds bugs, ARA finds flaws. A reasonable approach to software security covers both. We can't ignore the architecture.

To their credit, the good people at Coverity are very careful about how they position Scan project results. They say things like, “Eleven diligent projects which had resolved all of the defects identified at Rung 1 are the first projects to be upgraded to Rung 2. Those projects are Amanda, NTP, OpenPAM, OpenVPN, Overdose, Perl, PHP, Postfix, Python, Samba, and TCL.”

No overblown security claims there, just reality – and some solid results. Good for them. I hope they keep it up.

If you’re interested in static analysis for security (and you should be), buy and read Secure Programming with Static Analysis by Brian Chess and Jacob West.

— Gary McGraw is CTO of Cigital Inc. Special to Dark Reading
