9/22/2015
Jeff Williams
Commentary
Why It's Insane To Trust Static Analysis

If you care about achieving application security at scale, then your highest priority should be to move to tools that empower everyone, not just security experts.

In a previous blog, Jason Schmitt, the vice president and general manager of HP Fortify, promotes the static (Oops… status) quo by spreading some fear, uncertainty, and doubt about the newest type of application security tool, known as Interactive Application Security Testing (IAST). Vendors selling static analysis tools for security have been overclaiming and under-delivering for over a decade. It's time to stop misleading the public.

Jason seems to have reacted strongly to my observation that it’s a problem if you need security experts every time you want to run a security tool. What he doesn’t seem to understand is that this creates an expensive, wasteful, scale-killing bottleneck. Everyone who attempts to use static and dynamic analysis tools has a team of experts onboarding apps, tailoring scans, and triaging false positives.

[Read Jason’s opposing view in its entirety in The Common Core of Application Security.]

Of course we need experts, but they're a scarce resource. We need them making thoughtful and strategic decisions, conducting threat modeling and security architecture efforts, and turning security policies into rules that can be enforced across the lifecycle by automation. Experts should be "coaches and toolsmiths" -- not babysitting tools and blocking development. Static tools, and other tools that aren't continuous and require experts, simply don't fit my view of what an application security program should look like.

In search of a unified security product
Automating application security is absolutely critical, so let's talk about some of the things vendors won't tell you about static analysis, using as an example Contrast, an IAST product from Contrast Security, where I am CTO.

Contrast is a single agent that provides SAST, DAST, IAST, and runtime application self-protection (RASP) capabilities. Contrast works from inside the running application so it has full access to all the context necessary to be both fast and accurate. It applies analysis techniques selectively. For example, runtime analysis is amazing at injection flaws because it can track real data through the running application. Static analysis is good for flaws that tend to manifest in a single line of code, like hardcoded passwords and weak random numbers. And dynamic analysis is fantastic at finding problems revealed in HTTP requests and responses, like HTTP parameter pollution, cache control problems, and authentication weaknesses. 
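To make those strengths concrete, here's a minimal Java sketch I wrote for this article (hypothetical code, not from any product or test suite) with one flaw of each kind:

    import java.sql.Connection;
    import java.sql.Statement;
    import java.util.Random;
    import javax.servlet.http.HttpServletRequest;

    public class FlawExamples {
        // Single-line flaws: ideal for static analysis, no data flow needed.
        static final String DB_PASSWORD = "s3cret";   // hardcoded password
        static final Random TOKEN_RNG = new Random(); // weak randomness for security tokens

        // Injection flaw: only reliably visible by following untrusted data to a sink.
        void lookup(HttpServletRequest req, Connection con) throws Exception {
            String id = req.getParameter("id"); // untrusted source
            Statement st = con.createStatement();
            // SQL injection: runtime analysis watches the tainted value reach this sink.
            st.executeQuery("SELECT * FROM users WHERE id = " + id);
        }
    }

The first two flaws sit on a single line; the third exists only as a path from getParameter() to executeQuery(), which is why data flow matters. Cache-control and authentication weaknesses never show up in the source at all -- they appear only in HTTP traffic, which is exactly where dynamic analysis looks.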

IAST uses all these techniques on the entire application, including libraries and frameworks, not just the custom code. So rather than deploying a mishmash of standalone SAST, DAST, WAF, and IDS/IPS tools, the combination of IAST and RASP in a single agent provides unified security from the earliest stage of development all the way through production.

Everyone knows SAST is inaccurate
If you care about the accuracy of security tools, you should check out the new OWASP Benchmark Project. The project is sponsored by DHS and has created a huge test suite to gauge the true effectiveness of all kinds of application security testing tools — over 21,000 test cases.

The Benchmark calculates an overall score for a tool based on both the true positive rate and the false positive rate. This project is doing some real science on the speed and accuracy of these tools. Here are the results for a popular open source static analysis tool called FindBugs (with the Security Plugin):

[Chart: OWASP Benchmark score for FindBugs with the Security Plugin]

The Benchmark is designed to test a huge number of variants of each vulnerability and carefully measure the strengths and weaknesses of each tool. False positives are incredibly important, as each one takes time and expertise to track down. That's why it is critically important to understand both true and false positive metrics when choosing a security tool.
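To show what the score actually measures, here's a small sketch of the arithmetic as I understand the project's approach (my paraphrase, with made-up numbers; check the Benchmark's own documentation for the exact formula):

    public final class BenchmarkScoreSketch {
        static double score(int tp, int fn, int fp, int tn) {
            double tpr = (double) tp / (tp + fn); // true positive rate
            double fpr = (double) fp / (fp + tn); // false positive rate
            // A tool that flags everything (tpr == fpr == 1.0) scores zero,
            // as does random guessing; only real discrimination scores well.
            return 100.0 * (tpr - fpr);
        }

        public static void main(String[] args) {
            // Hypothetical: a tool finds 80% of the real flaws but also flags
            // half of the safe variants -- it earns only 30 out of 100.
            System.out.printf("%.1f%n", score(800, 200, 500, 500)); // prints 30.0
        }
    }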

The good news is that anyone can use the Benchmark to find out exactly what the strengths and weaknesses of their tools are. You don’t have to trust vendor claims. All you do is clone the OWASP Benchmark git repository, run your tool on it, and feed the report into the Benchmark’s scoring tool.

OWASP reports that the best static analysis tools score in the low 30’s (out of 100) against this benchmark. Dynamic analysis tools fared even more poorly. What jumps out is that static tools do very poorly on any type of vulnerability that involves data flow, particularly injection flaws. They do best on problems like weak random number generation that tend to be isolated to a single line of code.

SAST disrupts software development
There are two major problems with Jason’s criticisms of IAST performance. First, modern IAST is blazingly fast. But more importantly, you use IAST to find vulnerabilities during development and test, not production. So let’s talk about the metrics that matter during development and test.

Continuous, real-time feedback is a natural process fit for high-speed modern software development methodologies like Agile and DevOps. And since it’s a distributed approach, it works in parallel across all your applications.

On the other hand, static tools take hours or days to analyze a single application. And because you need experts, tons of RAM, and tons of CPU to analyze that single application, it’s very difficult to parallelize. I’ve seen this so many times before -- static ends up being massively disruptive to software development processes, resulting in the tool being shelved or used very rarely. When you get thousands of false alarms and each one takes between 10 minutes and several hours to track down, it’s impossible to dismiss the cost of inaccuracy.

SAST coverage is an illusion
Static tools only see code they can follow, which is why modern frameworks are so difficult for them. Libraries and third-party components are too big to analyze statically, which results in numerous “lost sources” and “lost sinks” – toolspeak for “we have no idea what happened inside this library.” Static tools also silently quit analyzing when things get too complicated.
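Here's a hedged sketch of what a "lost source" looks like in practice; SomeLibrary is a made-up stand-in for any binary-only dependency the analyzer has no model for:

    import java.sql.Connection;
    import java.sql.Statement;
    import javax.servlet.http.HttpServletRequest;

    public class LostFlow {
        // Stand-in for a third-party helper the analyzer can't see inside.
        static class SomeLibrary {
            static String normalize(String s) { return s.trim(); }
        }

        void search(HttpServletRequest req, Connection con) throws Exception {
            String term = SomeLibrary.normalize(req.getParameter("q"));
            // If the tool can't model normalize(), the taint trail goes cold
            // right here: it either reports nothing (a false negative) or
            // flags everything that touches the library (noise).
            Statement st = con.createStatement();
            st.executeQuery("SELECT * FROM docs WHERE body LIKE '%" + term + "%'");
        }
    }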

Try running a static analysis tool on an API, web service, or REST endpoint. The tool will run but it won’t find anything because it can’t understand the framework. And you’ll have no idea what code was and wasn’t analyzed. This false sense of security might be more dangerous than running no tool at all.
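Here's a sketch of why (a hypothetical JAX-RS endpoint, with a made-up Db helper stubbed in so it stands alone): the untrusted input arrives through a framework annotation, so a tool with no model of the framework never marks it as tainted.

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.QueryParam;

    @Path("/users")
    public class UserResource {
        // Hypothetical data-access helper, stubbed for illustration only.
        static class Db { static String query(String sql) { return sql; } }

        @GET
        public String find(@QueryParam("name") String name) {
            // No servlet API in sight: the framework wires 'name' up at
            // runtime. A static tool that doesn't understand @QueryParam
            // never taints it, so the injection below goes unreported.
            return Db.query("SELECT * FROM users WHERE name = '" + name + "'");
        }
    }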

Unlike static tools that provide zero visibility into code coverage, with IAST you control exactly what code gets tested. You can simply use your normal development, test, and integration processes and your normal coverage tools. If you don’t have good test coverage, use a simple crawler or record a Selenium script to play on your CI server.
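For example, a few lines of Selenium (hypothetical host and paths) running on your CI server are enough to exercise the routes while the agent watches from inside the application:

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.htmlunit.HtmlUnitDriver;

    public class SmokeCrawl {
        public static void main(String[] args) {
            WebDriver driver = new HtmlUnitDriver(); // headless, CI-friendly
            String base = "http://staging.example.com"; // hypothetical staging host
            for (String path : new String[] {"/", "/login", "/search?q=test", "/account"}) {
                driver.get(base + path); // each request drives real code paths under the agent
            }
            driver.quit();
        }
    }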

AppSec automation will empower everyone
If you care about achieving application security at scale, then your highest priority should be to move to tools that empower everyone, not just security experts. Check the OWASP Benchmark Project and find out the strengths and weaknesses of the tools you’re considering.

Whether legacy tool vendors like it or not, instrumentation will change application security forever. Consider what New Relic did to the performance market. Their agent uses instrumentation to measure performance highly accurately from within a running application. And it changed their industry from being dominated by tools for experts and PDF reports to one where everyone is empowered to do their own performance engineering. We can do the same for application security.
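If you haven't seen how such agents attach, here's a bare-bones sketch using the standard java.lang.instrument API -- an illustration of the hook point, nothing close to a production agent:

    import java.lang.instrument.Instrumentation;

    public class MinimalAgent {
        // Launched via: java -javaagent:agent.jar -jar app.jar
        // (the agent jar's manifest must declare Premain-Class: MinimalAgent)
        public static void premain(String args, Instrumentation inst) {
            // A real agent registers a transformer that weaves sensors into
            // application and library bytecode as classes load; this stub
            // returns null, meaning "leave the class unchanged."
            inst.addTransformer((loader, name, cls, domain, bytes) -> null);
        }
    }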

Related content:
What Do You Mean My Security Tools Don’t Work on APIs?!! by Jeff Williams
Software Security Is Hard But Not Impossible by Jason Schmitt

 

A pioneer in application security, Jeff Williams is the founder and CTO of Contrast Security, a revolutionary application security product that enhances software with the power to defend itself, check itself for vulnerabilities, and join a security command and control ...
Comments
KevGreene_Cyber, 11/29/2015
Re: SAST, DAST, and IAST all important testing technologies
I agree... SAST and DAST are too important to dismiss. We have to find a way to leverage the strengths of each while continually evolving and raising the bar for these tools.
johannacuriel, 10/1/2015
OWASP Benchmark official review at OWASP
I did a review of this project as part of the project review team at OWASP. It has never been my impression that the OWASP Benchmark project has been promoted within the OWASP community as a ready-to-use tool, but rather as a tool still in development. It is clear to me that the tool still needs a lot of testing, and even so, it will not be able to benchmark all the features of a SAST tool if, for example, the tool being benchmarked is not able to produce a complete XML report of all its results.

More details about OWASP Benchmark Project review:

https://drive.google.com/file/d/0B28S4R_cON7JZGtNZVRrMDl3NzQ/view?pli=1

https://groups.google.com/a/owasp.org/forum/?hl=en#!topic/projects-task-force/h1leFW8e8zE
planetlevel, 9/23/2015
Re: What would a benchmark against Coverity show?
In response to a few private inquiries, I want to make very clear that I think static analysis for many kinds of *quality* issues is fantastic. FindBugs in particular has an excellent accuracy record for non-security bugs, and it has helped me improve my code in the past.
planetlevel, 9/23/2015
Re: What would a benchmark against Coverity show?
The OWASP Benchmark Project supports many different commercial and open source tools, including:
  • Findbugs
  • HP Fortify
  • PMD
  • IBM AppScan
  • Veracode
  • CheckMarx
  • Synopsys Coverity
  • Parasoft
  • SonarQube

The picture in the article is FindBugs (security) but that's just one example of (pretty poor) static analysis capability.  The Benchmark also looks at many dynamic scanning tools.  The results are fascinating.

And as I mentioned, the results are easily reproducible. If you have Coverity, just clone the benchmark project and run Coverity on it. Then feed the results into the Benchmark scoring tool and get a report on exactly what you want to see.

I appreciate your optimism, but it's amazing what you find out when you actually measure.
Charlie Babcock, 9/22/2015
What would a benchmark against Coverity show?
Author Jeff Williams uses OWASP Benchmark results against FindBugs to disparage the effectiveness of static analysis of code, and I haven't heard much about FindBugs. I'd be more interested in what a benchmark against Coverity or one of the other more prominent static analysis tools might show. I suspect static analysis has done too much good for too long for it to be dismissed as easily as Williams does, with his confidence in the brilliance of Contrast Security's interactive application security testing. Might they each excel at different things?
cwysopal, 9/22/2015
SAST, DAST, and IAST all important testing technologies
IAST is a great testing technique that has some advantages SAST and DAST do not have. But there are clearly strengths in SAST and DAST that don't exist in IAST. It's not time to throw away your SAST and DAST investment. A mature appsec program combines approaches to maximize strengths and minimize weaknesses.

SAST doesn't require a running system with test data and automated test suites. This allows SAST to be used earlier in the dev cycle, when it is least expensive to fix flaws. DAST doesn't require modifying the production environment, so you don't need to find the server the app is running on, get approval, schedule a change, and contact an administrator to modify it. This allows DAST to be used more easily in production: web apps can be scanned just by knowing the URL of the application. Finding the best way to combine techniques will give you the best application security.

-Chris
