9/22/2015
10:31 AM
Jason Schmitt
Commentary

The Common Core Of Application Security

Why you will never succeed by teaching to the test.



As the debate with Jeff Williams continues on the best approach to application security, I’m struck by the fact that, despite everything I said about the right way to secure software, all he heard was “static analysis.” So I am going to agree with him on one point: you should not just buy a static analysis tool, run it and do what it says. My team at HP Fortify sells “the most broadly adopted SAST tool in the market,” according to the most recent Gartner Magic Quadrant for Application Security Testing, but that SAST tool is just one element necessary for success in software security.

[Read Jeff’s point of view in Why It’s Insane to Trust Static Analysis.]

You should instead take a proactive, systemic, and disciplined approach to changing the way you develop and buy software. You should educate your team on application security fundamentals and secure coding practices. You should develop security goals and policies, and implement a program with effective governance to achieve those goals and track and enforce your policies and progress. Then, and only then, should you bring in technology that helps you automate and scale the program foundation that you’ve designed and implemented. You will fail at this if you expect to buy a tool from us or anyone else and implement it without either having or hiring security experts.

What success looks like
As I mentioned, only after tackling the people and process challenge can you start thinking about technology. A single application security tool will never be enough to solve the difficult software security problem. We have had success in helping our customers secure software for 15 years because we offer market leading products in every category of application security – SAST, DAST, IAST, and RASP. All of these technologies are highly integrated not only with each other, but also with the other standard systems that our customers use, such as build automation and bug tracking tools. They all work in concert to produce not only accurate results but also relevant results tailored to the needs of a specific organization. We’ve also introduced new analytics technology that will further optimize the results from our products to minimize the volume of vulnerabilities developers have to remediate.

As you can tell, we have spent a lot of time thinking about how NOT to “disrupt software development.” We aren’t just thinking about it, though. With our customers, we’ve proven repeatedly that this approach delivers sustainable ROI and risk reduction. 

Let’s look at what a few of our customers achieve, based on our own internal reviews and testing, by the numbers:

  • 100 million – Lines of code a customer has scanned and remediated vulnerabilities from using our SAST technology
  • 10,000 – Number of vulnerabilities removed per month by a customer across all of the applications in their organization using our DAST technology
  • 3,000 – Number of applications across at least 10 programming languages that a customer scans weekly to identify and remediate all Critical, High, and Medium vulnerabilities using our SAST technology
  • 1,000+ – Customers who have adopted our IAST technology to improve the coverage, speed, and relevance of web app security testing
  • 300 – Number of production applications a customer protects against attack with our RASP technology while achieving PCI compliance

Each of these customers is unique in their business focus and challenge. What they all share is an awareness that they couldn’t achieve such results with a single tool working in isolation.

Take the test
But since Jeff really wants to talk about static analysis, let’s look at some numbers there, too. Let’s start with the OWASP Webgoat Benchmark Project to set the scene a bit better and compare results. Let’s first remember that the O in OWASP is for “Open,” and their commitment to radical transparency is what makes them such a valuable asset in security. The cause of application security will improve dramatically with collaboration, openness, and transparency, and my team commits a lot of time and resources to helping the cause with OWASP and other industry and government groups.

After my team received the latest version of the OWASP Webgoat Benchmark tests, we assessed its completeness, quality, and relevance in benchmarking application security tools against each other and ran our own HP Fortify Static Code Analyzer (SCA) product against the tests. Here’s how we did:

Table 1: HP Fortify Static Code Analyzer results against the OWASP Webgoat Benchmark v1.1

  • Number of Benchmark tests: 21,041
  • True positives detected by Fortify SCA and declared insecure by the Benchmark: 11,835 (100% true positive rate)
  • False negatives reported by Fortify SCA: 0 (0% false negative rate)
  • True positives detected by Fortify SCA but declared secure by the Benchmark: 9,206 (44% of Benchmark tests)
  • False positives reported by Fortify SCA: 4,852 (23% of Benchmark tests)
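To make the percentages in Table 1 concrete, here is a minimal sketch recomputing them from the raw counts. The counts are taken from the table above; the rate definitions used (true positive rate and false negative rate over the insecure tests, shares over all 21,041 tests) are standard textbook definitions, not the Benchmark project's own scoring code.

```python
# Raw counts from Table 1
benchmark_tests = 21041
tp_declared_insecure = 11835   # flagged by SCA, Benchmark agrees insecure
false_negatives = 0            # insecure tests SCA missed
tp_declared_secure = 9206      # flagged by SCA, Benchmark calls them secure
false_positives = 4852         # flagged by SCA, not real issues

# Rates over the tests the Benchmark declares insecure
insecure_tests = tp_declared_insecure + false_negatives
true_positive_rate = tp_declared_insecure / insecure_tests    # 1.0 -> 100%
false_negative_rate = false_negatives / insecure_tests        # 0.0 -> 0%

# Shares over all Benchmark tests
share_declared_secure = tp_declared_secure / benchmark_tests  # ~0.44 -> 44%
share_false_positives = false_positives / benchmark_tests     # ~0.23 -> 23%

print(f"TPR {true_positive_rate:.0%}, FNR {false_negative_rate:.0%}")
print(f"extra findings {share_declared_secure:.0%}, "
      f"false positives {share_false_positives:.0%}")
```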

In layman’s terms, we found 100% of the security issues that are supposed to be found in the test. We also found that a further 44% of the tests contained vulnerabilities that were declared secure by the Benchmark project. That means we found and manually verified over 9,000 of the test cases that were supposed to be secure, but in fact contained security vulnerabilities. These were either valid vulnerabilities of a different type than what the test intended to flag, or valid vulnerabilities in “dead code” that you can only find through static analysis.

Were there false positives in SCA against the benchmark? Yes, and thanks to the OWASP Benchmark, we’re fixing them as you read this. You will never hear me or my team say that there is an effective security tool with no false positives, because it doesn’t exist.

HP Fortify SCA found 9,206 real security issues in this test that the benchmark itself and Jeff’s IAST solution declare “secure.” Impartial, third-party benchmarks are very important to this industry, but the bar should be set very high on quality, comprehensiveness, and transparency. My team will continue to collaborate with the NIST Software Assurance Metrics and Tools Evaluation (SAMATE) project to foster a complete, impartial, and vendor-neutral benchmark of software security technologies.

Finally, it comes down to some simple questions. Would you rather teach to the test and ignore the broader world, taking the easy way out and feeling better because you found something with just a little bit of effort? Or would you rather have the depth of knowledge to handle anything thrown at you, and assurance that you’ve found and fixed every vulnerability that matters?

That’s what software security assurance is about – applying appropriate process, people, and technology to find and fix vulnerabilities that matter, using a variety of analysis technologies to achieve optimal coverage and accuracy, efficiently, and at scale.

Which approach would you trust with your software? And your job?

Related content:
What Do You Mean My Security Tools Don’t Work on APIs?!! by Jeff Williams
Software Security Is Hard But Not Impossible by Jason Schmitt

Jason Schmitt is vice president and general manager of the Fortify business within the HP Enterprise Security Products organization. In this role, he is responsible for driving the growth of Fortify's software security business and managing all operational functions within ... View Full Bio
Copyright © 2020 UBM Electronics, A UBM company, All rights reserved.