
Jason Schmitt

The Common Core Of Application Security

Why you will never succeed by teaching to the test.

As the debate with Jeff Williams continues on the best approach to application security, I’m struck by the fact that, despite everything I said about the right way to secure software, all he heard was “static analysis.” So I am going to agree with him on one point: you should not just buy a static analysis tool, run it and do what it says. My team at HP Fortify sells “the most broadly adopted SAST tool in the market,” according to the most recent Gartner Magic Quadrant for Application Security Testing, but that SAST tool is just one element necessary for success in software security.

[Read Jeff’s point of view in Why It’s Insane to Trust Static Analysis.]

You should instead take a proactive, systemic, and disciplined approach to changing the way you develop and buy software. You should educate your team on application security fundamentals and secure coding practices. You should develop security goals and policies, and implement a program with effective governance to achieve those goals and track and enforce your policies and progress. Then, and only then, should you bring in technology that helps you automate and scale the program foundation that you’ve designed and implemented. You will fail at this if you expect to buy a tool from us or anyone else and implement it without either having or hiring security experts.

What success looks like
As I mentioned, only after tackling the people and process challenge can you start thinking about technology. A single application security tool will never be enough to solve the difficult software security problem. We have had success in helping our customers secure software for 15 years because we offer market-leading products in every category of application security – SAST, DAST, IAST, and RASP. All of these technologies are highly integrated not only with each other, but also with the other standard systems that our customers use, such as build automation and bug tracking tools. They all work in concert to produce not only accurate results but also relevant results tailored to the needs of a specific organization. We’ve also introduced new analytics technology that will further optimize the results from our products to minimize the volume of vulnerabilities developers have to remediate.

As you can tell, we have spent a lot of time thinking about how NOT to “disrupt software development.” We aren’t just thinking about it, though. With our customers, we’ve proven repeatedly that this approach delivers sustainable ROI and risk reduction. 

Let’s look at what a few of our customers achieve, based on our own internal reviews and testing, by the numbers:

  • 100 million – Lines of code a customer has scanned, and remediated vulnerabilities from, using our SAST technology
  • 10,000 – Number of vulnerabilities removed per month by a customer across all of the applications in their organization using our DAST technology
  • 3,000 – Number of applications, spanning at least 10 programming languages, that a customer scans weekly to identify and remediate all Critical, High, and Medium vulnerabilities using our SAST technology
  • 1,000+ – Customers who use our IAST technology to improve the coverage, speed, and relevance of web app security testing
  • 300 – Number of production applications a customer protects against attack with our RASP technology to achieve PCI compliance

Each of these customers is unique in their business focus and challenge. What they all share is an awareness that they couldn’t achieve such results with a single tool working in isolation.

Take the test
But since Jeff really wants to talk about static analysis, let’s look at some numbers there, too. Let’s start with the OWASP Webgoat Benchmark Project to set the scene a bit better and compare results. Let’s first remember that the O in OWASP is for “Open,” and their commitment to radical transparency is what makes them such a valuable asset in security. The cause of application security will improve dramatically with collaboration, openness, and transparency, and my team commits a lot of time and resources to helping the cause with OWASP and other industry and government groups.

After my team received the latest version of the OWASP Webgoat Benchmark tests, we assessed its completeness, quality, and relevance in benchmarking application security tools against each other and ran our own HP Fortify Static Code Analyzer (SCA) product against the tests. Here’s how we did:

Table 1: HP Fortify Static Code Analyzer Results against OWASP Webgoat Benchmark v1.1

  • Number of Benchmark tests: 21,041
  • True Positives detected by Fortify SCA, and declared Insecure by Benchmark: 11,835 (100% true positive rate)
  • False Negatives reported by Fortify SCA: 0 (0% false negative rate)
  • True Positives detected by Fortify SCA, and declared Secure by Benchmark: 9,206 (44% of Benchmark tests)
  • False Positives reported by Fortify SCA: 4,852 (23% of Benchmark tests)
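The rates in the table follow directly from the raw counts. As a minimal sketch (Python, using the article's own numbers; variable names are mine):

```python
# Counts from Table 1 (OWASP Webgoat Benchmark v1.1 run of Fortify SCA).
total_tests = 21041
tp_insecure = 11835        # flagged by SCA, declared Insecure by the Benchmark
false_negatives = 0        # insecure tests SCA missed
tp_declared_secure = 9206  # flagged by SCA, declared Secure by the Benchmark
false_positives = 4852     # SCA findings that were not real vulnerabilities

# TP/FN rates are computed over the tests the Benchmark declares insecure.
tp_rate = tp_insecure / (tp_insecure + false_negatives)   # 1.0 -> 100%
fn_rate = false_negatives / (tp_insecure + false_negatives)  # 0.0 -> 0%

# The last two rows are expressed as a share of all 21,041 tests.
extra_share = tp_declared_secure / total_tests            # ~0.44
fp_share = false_positives / total_tests                  # ~0.23

print(f"TP rate {tp_rate:.0%}, FN rate {fn_rate:.0%}, "
      f"extra findings {extra_share:.0%}, FP share {fp_share:.0%}")
```

Note that the two percentage columns use different denominators, which is why 9,206 and 4,852 read as 44% and 23% rather than as rates over the insecure tests.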

In layman’s terms, we found 100% of the security issues that are supposed to be found in the test. We also found that a further 44% of the tests contained vulnerabilities that were declared secure by the Benchmark project. That means we found and manually verified over 9,000 of the test cases that were supposed to be secure, but in fact contained security vulnerabilities. These were either valid vulnerabilities of a different type than what the test intended to flag, or valid vulnerabilities in “dead code” that you can only find through static analysis.

Were there false positives in SCA against the benchmark? Yes, and thanks to the OWASP Benchmark, we’re fixing them as you read this. You will never hear me or my team say that there is an effective security tool with no false positives, because it doesn’t exist.

HP Fortify SCA found 9,206 real security issues in this test that the benchmark itself and Jeff’s IAST solution declare “secure.” Impartial, third-party benchmarks are very important to this industry, but the bar should be set very high on quality, comprehensiveness, and transparency. My team will continue to collaborate with the NIST Software Assurance Metrics and Tools Evaluation (SAMATE) project to foster a complete, impartial, and vendor-neutral benchmark of software security technologies.

Finally, it comes down to some simple questions. Would you rather teach to the test and ignore the broader world? To take the easy way out and feel better that you found something with just a little bit of effort? Or would you rather have a depth of knowledge for anything thrown at you, and assurance that you’ve found and fixed every vulnerability that matters?

That’s what software security assurance is about – applying appropriate process, people, and technology to find and fix vulnerabilities that matter, using a variety of analysis technologies to achieve optimal coverage and accuracy, efficiently, and at scale.

Which approach would you trust with your software? And your job?

Related content:
What Do You Mean My Security Tools Don’t Work on APIs?!! by Jeff Williams
Software Security Is Hard But Not Impossible by Jason Schmitt



Jason Schmitt is vice president and general manager of the Fortify business within the HP Enterprise Security Products organization. In this role, he is responsible for driving the growth of Fortify's software security business and managing all operational functions within ...

User Rank: Apprentice
9/25/2015 | 5:49:01 PM
Re: OWASP Benchmark Clarifications
Yes. That's me. And your concern is fair and you aren't the first to bring it up. We are addressing this by making everything free and open and reproducible by anyone, and more importantly, getting lots more people involved, so we can eliminate any potential for bias. We already have a number of open source projects contributing to the project, and a bunch of commercial vendors and even some non-vendors approached me at this week's AppSec USA conference and asked to get involved, which I welcome wholeheartedly. We are going to expand the team to as many who want to participate, and ensure there are many eyes and many contributors to the work we produce. The OWASP Board has expressed their support for this project, as it's exactly the kind of thing OWASP should be doing. This project is really getting some momentum and together we can all make it great. Please contribute if it's of interest to you.
User Rank: Apprentice
9/25/2015 | 9:57:08 AM
Re: OWASP Benchmark Clarifications
Is this the same Dave Wichers that is a co-founder of Aspect Security, the company that created Contrast? It seems like a conflict of interest for someone from a vendor to create a benchmark that will be used to grade their competition. I'm even more surprised that a government agency (DHS) would sponsor that activity.
User Rank: Apprentice
9/23/2015 | 3:42:42 PM
OWASP Benchmark Clarifications
Jason, this discussion is great and I'm thrilled that the OWASP Benchmark is driving improvements in application vulnerability detection tools. But I did want to add a few clarifications on how the Benchmark works.

In your Benchmark results table, you indicate: "True Positives detected by Fortify SCA, and declared Secure by Benchmark" - 9,206. While it's great that Fortify found all these additional vulnerabilities in the Benchmark, the Benchmark makes no claim that there are no other vulnerabilities in it beyond the ones specifically tested for and scored. Any such results found by any tool are simply ignored in the Benchmark scoring system, so they have no effect on the score one way or the other. So, saying that Fortify found a bunch of issues the project wasn't aware of and other tools did not find simply isn't accurate. Most of the tools we tested found a bunch of additional issues, just like Fortify did.

As part of our 1.2 effort, we have eliminated a number of unintended vulnerabilities of the type tested for in the Benchmark, particularly XSS. This is an ongoing effort and we have more work to do there. In fact, if you can send us your results, we'll be happy to use them to help us track down and eliminate more of them. That said, these 'extra' vulnerabilities are, and should be, ignored as they simply aren't measured/scored.

You also mention: "False Positives reported by Fortify SCA" - 4,852. In the Benchmark v1.1, there are 9,206 True Negative test cases, meaning 9,206 test cases that are safe, and do not possess the type of vulnerability they are testing for. And Fortify reported 4,852 of them as actual vulnerabilities (False Positives as you said). The Benchmark project scores that as 4,852 out of 9,206, which is a 52.7% False Positive rate. So if your True Positive rate is actually 100% as you claim, the Benchmark would produce an average score for Fortify of 100% - 52.7%, which equals 47.3%. This average score for Fortify is higher than the scores the project has seen with the results we were able to generate, so we are pleased to see that your team's efforts have improved its score against the Benchmark and that your customers will ultimately benefit from these improvements.
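The scoring arithmetic described above reduces to "true positive rate minus false positive rate." A minimal sketch (Python; `benchmark_score` is my own illustrative helper, not part of the OWASP project's code):

```python
def benchmark_score(tp_rate: float, fp_rate: float) -> float:
    """Benchmark-style score: reward detections, penalize false alarms."""
    return tp_rate - fp_rate

# 4,852 false positives over the 9,206 true-negative test cases in v1.1.
fp_rate = 4852 / 9206                    # ~52.7%

# Taking the claimed 100% true positive rate at face value.
score = benchmark_score(1.00, fp_rate)   # ~47.3%

print(f"FP rate {fp_rate:.1%}, Benchmark score {score:.1%}")
```

A tool that flagged every test case would score 100% − 100% = 0%, which is exactly the property this scoring scheme is designed to enforce: raw detection counts mean little without accounting for false alarms.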

I think discussions like this are incredibly healthy and hope lots of vendors for both commercial and free tools will get involved to make both the OWASP Benchmark project and their tools better for the community we both serve.  And given the amount of discussions I'm having with project participants at OWASP, the discussions are just getting started, and many tools, including Fortify, are getting better already. And in fact, I'm going to talk about exactly that at my OWASP AppSec USA talk on the Benchmark project tomorrow afternoon at 4. If any of you are around, please come by!!

Dave Wichers

OWASP Benchmark Project Lead