
Vulnerabilities / Threats

9/22/2015 10:31 AM
Jason Schmitt
Commentary

The Common Core Of Application Security

Why you will never succeed by teaching to the test.

As the debate with Jeff Williams continues on the best approach to application security, I’m struck by the fact that, despite everything I said about the right way to secure software, all he heard was “static analysis.” So I am going to agree with him on one point: you should not just buy a static analysis tool, run it and do what it says. My team at HP Fortify sells “the most broadly adopted SAST tool in the market,” according to the most recent Gartner Magic Quadrant for Application Security Testing, but that SAST tool is just one element necessary for success in software security.

[Read Jeff’s point of view in Why It’s Insane to Trust Static Analysis.]

You should instead take a proactive, systemic, and disciplined approach to changing the way you develop and buy software. You should educate your team on application security fundamentals and secure coding practices. You should develop security goals and policies, and implement a program with effective governance to achieve those goals and track and enforce your policies and progress. Then, and only then, should you bring in technology that helps you automate and scale the program foundation that you’ve designed and implemented. You will fail at this if you expect to buy a tool from us or anyone else and implement it without either having or hiring security experts.

What success looks like
As I mentioned, only after tackling the people and process challenge can you start thinking about technology. A single application security tool will never be enough to solve the difficult software security problem. We have had success in helping our customers secure software for 15 years because we offer market-leading products in every category of application security – SAST, DAST, IAST, and RASP. All of these technologies are highly integrated not only with each other, but also with the other standard systems that our customers use, such as build automation and bug tracking tools. They all work in concert to produce not only accurate results but also relevant results tailored to the needs of a specific organization. We’ve also introduced new analytics technology that will further optimize the results from our products to minimize the volume of vulnerabilities developers have to remediate.

As you can tell, we have spent a lot of time thinking about how NOT to “disrupt software development.” We aren’t just thinking about it, though. With our customers, we’ve proven repeatedly that this approach delivers sustainable ROI and risk reduction. 

Let’s look at what a few of our customers achieve, based on our own internal reviews and testing, by the numbers:

  • 100 million – Lines of code a customer has scanned, and remediated vulnerabilities in, using our SAST technology
  • 10,000 – Number of vulnerabilities removed per month by a customer across all of the applications in their organization using our DAST technology
  • 3,000 – Number of applications, across at least 10 programming languages, that a customer scans weekly to identify and remediate all Critical, High, and Medium vulnerabilities using our SAST technology
  • 1,000+ – Customers who use our IAST technology to improve the coverage, speed, and relevance of web app security testing
  • 300 – Number of production applications a customer protects against attack, and keeps PCI compliant, using our RASP technology

Each of these customers is unique in their business focus and challenge. What they all share is an awareness that they couldn’t achieve such results with a single tool working in isolation.

Take the test
But since Jeff really wants to talk about static analysis, let’s look at some numbers there, too. Let’s start with the OWASP Webgoat Benchmark Project to set the scene a bit better and compare results. Let’s first remember that the O in OWASP is for “Open,” and their commitment to radical transparency is what makes them such a valuable asset in security. The cause of application security will improve dramatically with collaboration, openness, and transparency, and my team commits a lot of time and resources to helping the cause with OWASP and other industry and government groups.

After my team received the latest version of the OWASP Webgoat Benchmark tests, we assessed its completeness, quality, and relevance in benchmarking application security tools against each other and ran our own HP Fortify Static Code Analyzer (SCA) product against the tests. Here’s how we did:

Table 1: HP Fortify Static Code Analyzer Results against OWASP Webgoat Benchmark v1.1

  • Number of Benchmark tests: 21,041
  • True positives detected by Fortify SCA, and declared insecure by Benchmark: 11,835 (100% true positive rate)
  • False negatives reported by Fortify SCA: 0 (0% false negative rate)
  • True positives detected by Fortify SCA, but declared secure by Benchmark: 9,206 (44% of Benchmark tests)
  • False positives reported by Fortify SCA: 4,852 (23% of Benchmark tests)

In layman’s terms, we found 100% of the security issues that are supposed to be found in the test. We also found that a further 44% of the tests contained vulnerabilities that were declared secure by the Benchmark project. That means we found, and manually verified, vulnerabilities in over 9,000 test cases that were supposed to be secure. These were either valid vulnerabilities of a different type than what the test intended to flag, or valid vulnerabilities in “dead code” that you can only find through static analysis.
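
To make the “dead code” point concrete, here is a minimal, hypothetical Java sketch (not an actual Benchmark test case; the class and method names are invented for illustration). DAST and IAST observe only code that actually executes, so a vulnerability in an unreachable method never surfaces at runtime, while a static analyzer still traces the tainted string into the query and reports it.

    import java.sql.Connection;
    import java.sql.Statement;

    public class DeadCodeInjection {

        public static void main(String[] args) {
            // No execution path ever reaches lookupUser(), so dynamic or
            // interactive testing never exercises the vulnerable statement below.
            System.out.println("lookupUser() is never called from any entry point.");
        }

        // "Dead code": unreachable today, but still a latent defect if it is
        // ever revived or copied elsewhere.
        static void lookupUser(Connection conn, String userInput) throws Exception {
            Statement stmt = conn.createStatement();
            // Untrusted input concatenated directly into SQL -- classic injection.
            stmt.executeQuery("SELECT * FROM users WHERE name = '" + userInput + "'");
        }
    }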

Were there false positives in SCA against the benchmark? Yes, and thanks to the OWASP Benchmark, we’re fixing them as you read this. You will never hear me or my team say that there is an effective security tool with no false positives, because such a tool doesn’t exist.

HP Fortify SCA found 9,206 real security issues in this test that the benchmark itself and Jeff’s IAST solution declare “secure.” Impartial, third-party benchmarks are very important to this industry, but the bar should be set very high on quality, comprehensiveness, and transparency. My team will continue to collaborate with the NIST Software Assurance Metrics and Tools Evaluation (SAMATE) project to foster a complete, impartial, and vendor-neutral benchmark of software security technologies.

Finally, it comes down to some simple questions. Would you rather teach to the test and ignore the broader world, taking the easy way out and feeling better because you found something with just a little bit of effort? Or would you rather have the depth of knowledge to handle anything thrown at you, and the assurance that you’ve found and fixed every vulnerability that matters?

That’s what software security assurance is about – applying appropriate process, people, and technology to find and fix vulnerabilities that matter, using a variety of analysis technologies to achieve optimal coverage and accuracy, efficiently and at scale.

Which approach would you trust with your software? And your job?

Related content:
What Do You Mean My Security Tools Don’t Work on APIs?!! by Jeff Williams
Software Security Is Hard But Not Impossible by Jason Schmitt

Jason Schmitt is vice president and general manager of the Fortify business within the HP Enterprise Security Products organization. In this role, he is responsible for driving the growth of Fortify's software security business and managing all operational functions within ... View Full Bio
Comments
DaveWichers, User Rank: Apprentice
9/25/2015 | 5:49:01 PM
Re: OWASP Benchmark Clarifications
Yes. That's me. And your concern is fair and you aren't the first to bring it up. We are addressing this by making everything free and open and reproducible by anyone, and more importantly, getting lots more people involved, so we can eliminate any potential for bias. We already have a number of open source projects contributing to the project, and a bunch of commercial vendors and even some non-vendors approached me at this week's AppSec USA conference and asked to get involved, which I welcome wholeheartedly. We are going to expand the team to as many as want to participate, and ensure there are many eyes and many contributors to the work we produce. The OWASP Board has expressed their support for this project, as it's exactly the kind of thing OWASP should be doing. This project is really getting some momentum and together we can all make it great. Please contribute if it's of interest to you.
BigJim2, User Rank: Apprentice
9/25/2015 | 9:57:08 AM
Re: OWASP Benchmark Clarifications
Is this the same Dave Wichers that is a co-founder of Aspect Security, the company that created Contrast? It seems like a conflict of interest for someone from a vendor to create a benchmark that will be used to grade their competition. I'm even more surprised that a government agency (DHS) would sponsor that activity.
DaveWichers, User Rank: Apprentice
9/23/2015 | 3:42:42 PM
OWASP Benchmark Clarifications
Jason, this discussion is great and I'm thrilled that the OWASP Benchmark is driving improvements in application vulnerability detection tools. But I did want to add a few clarifications on how the Benchmark works.

In your Benchmark results table, you indicate: "True Positives detected by Fortify SCA, and declared Secure by Benchmark" - 9,206. While it's great that Fortify found all these additional vulnerabilities in the Benchmark, the Benchmark makes no claim that there are no other vulnerabilities in it beyond the ones specifically tested for and scored. Any such results found by any tool are simply ignored in the Benchmark scoring system, so they have no effect on the score one way or the other. So, saying that Fortify found a bunch of issues the project wasn't aware of and other tools did not find simply isn't accurate. Most of the tools we tested found a bunch of additional issues, just like Fortify did.

As part of our 1.2 effort, we have eliminated a number of unintended vulnerabilities of the type tested for in the Benchmark, particularly XSS. This is an ongoing effort and we have more work to do there. In fact, if you can send us your results, we'll be happy to use them to help us track down and eliminate more of them. That said, these 'extra' vulnerabilities are, and should be, ignored as they simply aren't measured/scored.

You also mention: "False Positives reported by Fortify SCA" - 4,852. In the Benchmark v1.1, there are 9,206 True Negative test cases, meaning 9,206 test cases that are safe and do not possess the type of vulnerability they are testing for. And Fortify reported 4,852 of them as actual vulnerabilities (False Positives, as you said). The Benchmark project scores that as 4,852 out of 9,206, which is a 52.7% False Positive rate. So if your True Positive rate is actually 100% as you claim, the Benchmark would produce an average score for Fortify of 100% - 52.7%, which equals 47.3%. This average score for Fortify is higher than the scores the project has seen with the results we were able to generate, so we are pleased to see that your team's efforts have improved its score against the Benchmark and that your customers will ultimately benefit from these improvements.
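
To spell out that arithmetic, here is a minimal sketch of the scoring described above, using only the figures quoted in this thread (assumed for illustration, not an official Benchmark scorecard):

    public class BenchmarkScoreSketch {
        public static void main(String[] args) {
            double truePositiveRate  = 11_835.0 / 11_835.0;      // all scored "insecure" cases reported
            double falsePositiveRate =  4_852.0 /  9_206.0;      // ~52.7% of scored "secure" cases reported
            double score = truePositiveRate - falsePositiveRate; // score = TPR minus FPR
            System.out.printf("TPR = %.1f%%, FPR = %.1f%%, score = %.1f%%%n",
                    100 * truePositiveRate, 100 * falsePositiveRate, 100 * score);
        }
    }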

I think discussions like this are incredibly healthy and hope lots of vendors for both commercial and free tools will get involved to make both the OWASP Benchmark project and their tools better for the community we both serve.  And given the amount of discussions I'm having with project participants at OWASP, the discussions are just getting started, and many tools, including Fortify, are getting better already. And in fact, I'm going to talk about exactly that at my OWASP AppSec USA talk on the Benchmark project tomorrow afternoon at 4. If any of you are around, please come by!!

Dave Wichers

OWASP Benchmark Project Lead