Dark Reading is part of the Informa Tech Division of Informa PLC



10:30 AM
Jeff Williams

Why It's Insane to Trust Static Analysis

If you care about achieving application security at scale, then your highest priority should be to move to tools that empower everyone, not just security experts.

In a previous blog, Jason Schmitt, the vice president and general manager of HP Fortify, promotes the static (Oops… status) quo by spreading some fear, uncertainty, and doubt about the newest type of application security tool, known as Interactive Application Security Testing (IAST). Vendors selling static analysis tools for security have been overclaiming and under-delivering for over a decade. It's time to stop misleading the public.

Jason seems to have reacted strongly to my observation that it’s a problem if you need security experts every time you want to run a security tool. What he doesn’t seem to understand is that this creates an expensive, wasteful, scale-killing bottleneck. Everyone who attempts to use static and dynamic analysis tools has a team of experts onboarding apps, tailoring scans, and triaging false positives.

[Read Jason’s opposing view in its entirety in The Common Core of Application Security.]

Of course we need experts, but they're a scarce resource. We need them making thoughtful and strategic decisions, conducting threat modeling and security architecture efforts, and turning security policies into rules that can be enforced across the lifecycle by automation. Experts should be "coaches and toolsmiths" -- not babysitting tools and blocking development. Static tools, and other tools that aren't continuous and require experts, simply don't fit my vision of what an application security program should look like.

In search of a unified security product
Automating application security is absolutely critical, so let’s talk about some of the things vendors won’t tell you about static analysis using, as an example, Contrast, an interactive application security testing (IAST) product from Contrast Security, where I am CTO.

Contrast is a single agent that provides SAST, DAST, IAST, and runtime application self-protection (RASP) capabilities. Contrast works from inside the running application so it has full access to all the context necessary to be both fast and accurate. It applies analysis techniques selectively. For example, runtime analysis is amazing at injection flaws because it can track real data through the running application. Static analysis is good for flaws that tend to manifest in a single line of code, like hardcoded passwords and weak random numbers. And dynamic analysis is fantastic at finding problems revealed in HTTP requests and responses, like HTTP parameter pollution, cache control problems, and authentication weaknesses. 
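To make the "single line of code" category concrete, here is an illustrative Java fragment (class and field names are hypothetical, not from any product) showing the two flaws just mentioned. Both are visible to a scanner without any data-flow reasoning, which is why static analysis handles them well:

```java
import java.security.SecureRandom;
import java.util.Random;

// Illustrative examples of flaws that manifest in a single line of code.
public class SingleLineFlaws {
    // Flaw: hardcoded credential -- a SAST tool can flag this literal directly.
    public static final String DB_PASSWORD = "hunter2";

    // Flaw: java.util.Random is predictable; the constructor call itself is the finding.
    public static int weakToken() {
        return new Random().nextInt();
    }

    // Fix: SecureRandom draws from a cryptographically strong source.
    public static int strongToken() {
        return new SecureRandom().nextInt();
    }
}
```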

IAST uses all these techniques on the entire application, including libraries and frameworks, not just the custom code. So rather than deploying a mishmash of standalone SAST, DAST, WAF, and IDS/IPS tools, the combination of IAST and RASP in a single agent provides unified security from the earliest stage of development all the way through production.

Everyone knows SAST is inaccurate
If you care about the accuracy of security tools, you should check out the new OWASP Benchmark Project. The project is sponsored by DHS and has created a huge test suite to gauge the true effectiveness of all kinds of application security testing tools — over 21,000 test cases.

The Benchmark calculates an overall score for a tool based on both the true positive rate and the false positive rate. This project is doing some real science on the speed and accuracy of these tools. Here are the results for a popular open source static analysis tool called FindBugs (with the Security Plugin).

The Benchmark is designed to carefully test a huge number of variants of each vulnerability, to carefully measure the strengths and weaknesses of each tool. False positives are incredibly important, as each one takes time and expertise to track down. That’s why it is critically important to understand both true and false positive metrics when choosing a security tool.
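As a rough sketch of how a score can combine those two rates, here is a Youden-style calculation normalized to a 0–100 scale. Treat this as an illustration of the idea only; the Benchmark's official formula lives in its own scorecard tool:

```java
// Sketch: reward true positives, penalize false positives.
// A perfect tool scores 100; a tool that flags at random scores near 0.
public class BenchmarkScore {
    public static double score(int tp, int fn, int fp, int tn) {
        double tpr = (double) tp / (tp + fn);   // true positive rate
        double fpr = (double) fp / (fp + tn);   // false positive rate
        return (tpr - fpr) * 100;
    }
}
```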

The good news is that anyone can use the Benchmark to find out exactly what the strengths and weaknesses of their tools are. You don’t have to trust vendor claims. All you do is clone the OWASP Benchmark git repository, run your tool on it, and feed the report into the Benchmark’s scoring tool.

OWASP reports that the best static analysis tools score in the low 30s (out of 100) against this benchmark. Dynamic analysis tools fared even more poorly. What jumps out is that static tools do very poorly on any type of vulnerability that involves data flow, particularly injection flaws. They do best on problems like weak random number generation that tend to be isolated to a single line of code.
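Here is a hypothetical Java fragment showing why injection is a data-flow problem: the untrusted value enters in one place and reaches the sink in another, so flagging it requires tracking data through the program rather than scanning any single line.

```java
// Illustrative SQL injection: the flaw is the *path* from source to sink.
public class InjectionFlaw {
    // Sink: concatenating input into SQL -- only a finding if the analyzer
    // can connect userInput back to an untrusted source elsewhere in the app.
    public static String buildQuery(String userInput) {
        return "SELECT * FROM users WHERE name = '" + userInput + "'";
    }

    // Fix: a parameterized query keeps data out of the SQL grammar entirely;
    // the value would be bound separately via PreparedStatement.
    public static String safeQuery() {
        return "SELECT * FROM users WHERE name = ?";
    }
}
```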

SAST disrupts software development
There are two major problems with Jason’s criticisms of IAST performance. First, modern IAST is blazingly fast. But more importantly, you use IAST to find vulnerabilities during development and test, not production. So let’s talk about the metrics that matter during development and test.

Continuous, real-time feedback is a natural process fit for high-speed modern software development methodologies like Agile and DevOps. And since it’s a distributed approach, it works in parallel across all your applications.

On the other hand, static tools take hours or days to analyze a single application. And because you need experts, tons of RAM, and tons of CPU to analyze that single application, it’s very difficult to parallelize. I’ve seen this so many times before -- static ends up being massively disruptive to software development processes, resulting in the tool being shelved or used very rarely. When you get thousands of false alarms and each one takes between 10 minutes and several hours to track down, it’s impossible to dismiss the cost of inaccuracy.

SAST coverage is an illusion
Static tools only see code they can follow, which is why modern frameworks are so difficult for them. Libraries and third-party components are too big to analyze statically, which results in numerous “lost sources” and “lost sinks” – toolspeak for “we have no idea what happened inside this library.” Static tools also silently quit analyzing when things get too complicated.

Try running a static analysis tool on an API, web service, or REST endpoint. The tool will run but it won’t find anything because it can’t understand the framework. And you’ll have no idea what code was and wasn’t analyzed. This false sense of security might be more dangerous than running no tool at all.
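A hypothetical example of the blind spot: modern frameworks routinely dispatch through reflection, where the method actually executed is chosen by a string at runtime. A static analyzer typically loses the call chain right here, yet reports nothing about what it skipped.

```java
import java.lang.reflect.Method;

// Dispatch through reflection -- the kind of indirection frameworks use
// constantly, and which static analysis generally cannot follow.
public class HiddenCall {
    public static String handle(String input) {
        return "echo:" + input;
    }

    public static String dispatch(String methodName, String input) {
        try {
            // The target is a runtime string; SAST cannot resolve it statically.
            Method m = HiddenCall.class.getMethod(methodName, String.class);
            return (String) m.invoke(null, input);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}
```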

Unlike static tools that provide zero visibility into code coverage, with IAST you control exactly what code gets tested. You can simply use your normal development, test, and integration processes and your normal coverage tools. If you don’t have good test coverage, use a simple crawler or record a Selenium script to play on your CI server.
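The "simple crawler" idea can be sketched in a few lines: even naive link extraction over your app's pages generates the traffic an in-app agent needs to observe. This is an illustrative fragment, not a production crawler:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal link extraction -- each extracted href is a page a crawler
// would visit next, exercising code for the agent to analyze.
public class LinkExtractor {
    private static final Pattern HREF = Pattern.compile("href=\"([^\"]+)\"");

    public static List<String> extractLinks(String html) {
        List<String> links = new ArrayList<>();
        Matcher m = HREF.matcher(html);
        while (m.find()) {
            links.add(m.group(1));
        }
        return links;
    }
}
```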

AppSec automation will empower everyone
If you care about achieving application security at scale, then your highest priority should be to move to tools that empower everyone, not just security experts. Check the OWASP Benchmark Project and find out the strengths and weaknesses of the tools you’re considering.

Whether legacy tool vendors like it or not, instrumentation will change application security forever. Consider what New Relic did to the performance market. Their agent uses instrumentation to measure performance highly accurately from within a running application. And it changed their industry from being dominated by tools for experts and PDF reports to one where everyone is empowered to do their own performance engineering. We can do the same for application security.
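The instrumentation principle behind such agents can be sketched as a wrapper that measures work from inside the running application. Real agents rewrite bytecode via java.lang.instrument rather than requiring code changes; this hand-rolled version (hypothetical names) just shows the measurement idea:

```java
import java.util.function.Supplier;

// Sketch: time a unit of work and record the observation in-process,
// the same vantage point an instrumentation agent has.
public class Timed {
    public static <T> T measure(String label, Supplier<T> work, StringBuilder log) {
        long start = System.nanoTime();
        T result = work.get();
        long elapsed = System.nanoTime() - start;
        log.append(label).append(" took ").append(elapsed).append(" ns\n");
        return result;
    }
}
```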

Related content:
What Do You Mean My Security Tools Don’t Work on APIs?!! by Jeff Williams
Software Security Is Hard But Not Impossible by Jason Schmitt


A pioneer in application security, Jeff Williams is the founder and CTO of Contrast Security, a revolutionary application security product that enhances software with the power to defend itself, check itself for vulnerabilities, and join a security command and control ... View Full Bio

Comments
User Rank: Author
11/29/2015 | 4:11:11 PM
Re: SAST, DAST, and IAST all important testing technologies
I agree... SAST and DAST are too important to dismiss. We have to find a way to leverage the strengths of each while working to continually evolve and raise the bar for these tools.
User Rank: Apprentice
10/1/2015 | 8:02:51 AM
OWASP Benchmark official review at OWASP
I did a review of this project as part of the project review team at OWASP. It has never been my impression that the OWASP Benchmark project has been promoted within the OWASP community as a 'ready to use' tool, but rather as a tool still in a development stage. It is clear to me that the tool still needs a lot of testing, and even so, it will not be able to 'benchmark' all the features of a SAST tool, for example, if the tool being benchmarked is not able to produce complete XML output reports with all its results.

More details about OWASP Benchmark Project review:


User Rank: Author
9/23/2015 | 12:34:36 PM
Re: What would a benchmark against Coverity show?
In response to a few private inquiries, I want to make very clear that I think static analysis for many kinds of *quality* issues is fantastic. FindBugs in particular has an excellent accuracy record for non-security bugs and it has helped me improve my code in the past.
User Rank: Author
9/23/2015 | 12:14:57 AM
Re: What would a benchmark against Coverity show?
The OWASP Benchmark Project supports many different commercial and open source tools, including: 
  • FindBugs
  • HP Fortify
  • PMD
  • IBM AppScan
  • Veracode
  • CheckMarx
  • Synopsys Coverity
  • Parasoft
  • SonarQube

The picture in the article is FindBugs (security), but that's just one example of (pretty poor) static analysis capability. The Benchmark also looks at many dynamic scanning tools. The results are fascinating.

And as I mentioned, it's easily reproducible. If you have Coverity, just clone the Benchmark project and run Coverity on it. Then feed the results into the Benchmark scoring tool and get a report on exactly what you want to see.

I appreciate your optimism, but it's amazing what you find out when you actually measure.
Charlie Babcock
User Rank: Ninja
9/22/2015 | 9:09:07 PM
What would a benchmark against Coverity show?
Author Jeff Williams uses OWASP Benchmark results against FindBugs to disparage the effectiveness of static analysis of code, and I haven't heard much about FindBugs. I'd be more interested in what a benchmark against Coverity or one of the other more prominent static analysis tools might show. I suspect static analysis has done too much good for too long for it to be dismissed as easily as Williams does, with his confidence in the brilliance of Contrast Security's interactive application security testing. Might they each excel at different things?
User Rank: Apprentice
9/22/2015 | 12:42:47 PM
SAST, DAST, and IAST all important testing technologies

IAST is a great testing technique that has some advantages that SAST and DAST do not have. But there are clearly strengths that SAST and DAST have that don't exist in IAST. It's not time to throw away your SAST and DAST investment. A mature app sec program combines approaches to maximize strengths and minimize weaknesses.

SAST doesn't require a running system with test data and automated test suites. This allows SAST to be used earlier in the dev cycle, when it is least expensive to fix flaws. DAST doesn't require modifying the production environment, so you don't need to find the server the app is running on, get approval, schedule a change, and contact an administrator to modify it. This allows DAST to be used more easily in production: web apps can be scanned just by knowing the URL of the application. Finding the best way to combine techniques will give you the best application security.

