‘Re-innovating’ Static Analysis: 4 Steps

Before we pronounce the death of static analysis, let’s raise the bar with a modern framework that keeps pace with the complexity and size found in today’s software.

Kevin E. Greene, Public Sector CTO, OpenText Cybersecurity

December 9, 2015

6 Min Read

Static analysis isn’t dead, as some have suggested. Has it lost some of its luster? Absolutely! Many studies suggest that static analysis tools (commercial and open source) underperform on certain types of bugs or weakness classes. But one of the reasons I like tool studies is that they help you understand what a tool can and cannot do -- provided you have developed the test cases to measure whether the tool actually detected the coding issue or violation.

Tool studies also help you better understand the behavior and characteristics of static analysis tools for a given code construct or coding style. Static analysis tools perform differently on different program structures, and we need to understand, with confidence, why tools fail on certain types of code if we are going to raise the bar in static analysis capabilities and innovate.

The results of many of these tool studies haven’t been favorable. In fact, one could argue that given the simplicity of the test cases used, static analysis tools should be performing much better. For instance, the Juliet Test Case suite, funded by the NSA Center for Assured Software, is a collection of synthetic Java and C/C++ test cases, meaning they are created as examples with well-characterized weaknesses.
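To make “synthetic” concrete, here is a minimal, hypothetical sketch in the spirit of a Juliet-style test case (illustrative only, not taken from the actual suite): a “bad” function containing a well-characterized weakness, paired with a “good” variant, so a tool’s detection can be scored unambiguously.

    /* Hypothetical, Juliet-style synthetic test case (illustrative only).
       The weakness is well characterized: CWE-121, stack-based buffer
       overflow via an unbounded strcpy into a fixed-size buffer. */
    #include <stdio.h>
    #include <string.h>

    void bad(const char *input)
    {
        char buf[10];
        strcpy(buf, input);          /* FLAW: no bounds check on input length */
        printf("%s\n", buf);
    }

    void good(const char *input)
    {
        char buf[10];
        strncpy(buf, input, sizeof(buf) - 1);   /* FIX: length-bounded copy */
        buf[sizeof(buf) - 1] = '\0';
        printf("%s\n", buf);
    }

A tool is then scored on whether it flags the call in bad() and stays quiet on good(); real Juliet cases follow the same bad/good pattern, just with more variation in control and data flow.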

A criticism of Juliet is that the test cases don’t represent “real” world software. Given that the test cases are synthetic and less complex than real-world programs, you might expect tools to perform much better, but that hasn’t been the case. I’m aware of at least four tool studies where the results have been mediocre across the board -- the OWASP Benchmark, the NIST Static Analysis Tool Exposition (SATE), the NSA Center for Assured Software, and a project funded at IUPUI, led by Dr. James Hill.

One revelation from the tool studies is that each tool does something really well: a “sweet spot.” Most of the tools have several sweet spots, but outside of them, the tools underperform tremendously. It should be noted that, overall, the commercial static analysis tools fare better than the open-source tools, but some studies suggest that open-source tools may be better at finding a particular weakness.

Improving static analysis

Static analysis tools are not dead; they just need to be updated to keep pace with modern-day software. The software assurance community needs to put more emphasis on, and investment in, research and development to find new breakthroughs and advances in techniques that improve static analysis capabilities.

Organizations that buy static analysis tools have to put more pressure on commercial tool vendors to invest more in R&D so that tools can be modernized and improved. Adding rules and heuristics does not fix the problem long-term, nor does it provide the innovation to keep pace with the evolution of software. We saw with the Heartbleed vulnerability in OpenSSL that vendors can add rules and heuristics to identify the weakness that exposed the vulnerability (after the fact). The fact that none of the tools was able to detect that weakness beforehand is the crux of the problem with static analysis tools and capabilities.
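For readers who want the shape of that weakness, the following is a simplified paraphrase (not the actual OpenSSL source) of the missing bounds check behind Heartbleed: the number of bytes echoed back is taken from the attacker-controlled length field rather than from the size of the record actually received.

    /* Simplified paraphrase of the Heartbleed pattern (out-of-bounds read);
       field layout and names are illustrative, not the real OpenSSL code.
       'record_len' is how many bytes were actually received from the peer. */
    #include <stdlib.h>
    #include <string.h>

    unsigned char *build_heartbeat_response(const unsigned char *record,
                                            size_t record_len)
    {
        size_t payload = (record[1] << 8) | record[2];   /* attacker-controlled length */
        unsigned char *resp = malloc(3 + payload);
        if (resp == NULL)
            return NULL;

        /* FLAW: copies 'payload' bytes without checking it against record_len,
           so memory past the end of 'record' can be echoed back to the peer. */
        memcpy(resp + 3, record + 3, payload);

        /* A fix would reject the request when 3 + payload > record_len. */
        return resp;
    }

The pattern is a classic taint-style bug -- an untrusted length flowing into a copy -- yet, as noted above, the tools of the day did not flag it until rules were added after disclosure.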

I want to share with you a research project I’m funding to push forward the state of the art in static analysis capabilities. The Static Analysis Tool Modernization Project (STAMP) is an attempt to address the lack of innovation around static analysis tools. The goal of STAMP is to modernize static analysis tools, creating better techniques that can scale to the complexity and size of today’s software. The inspiration for STAMP came from the HGTV show Property Brothers, in which the brothers find neglected homes and invest money to renovate them. STAMP has the potential to renovate (re-innovate) static analysis capabilities. STAMP will focus on four key areas:

1. Develop improved code constructs and test cases that represent “real” world programs (modern software). This will address some of the shortcomings of Juliet and, to a certain extent, of newer test case suites such as the OWASP Benchmark project. The next generation of test cases developed in STAMP will help baseline existing state-of-the-art static analysis tools.

2. Conduct an in-depth tool study to understand what tools can and cannot do in terms of coverage across the various weakness classes. Identifying the gaps and strengths of static analysis tools will help pinpoint the areas where static analysis capabilities need to be “modernized.”

3. Develop a modernization framework to improve the capabilities of static analysis tools. Engaging in R&D to develop a framework that explores new techniques, methods, and services will help make static analysis tools more precise and sound, and achieve what many call “security at-speed.”

4. Score and label static analysis tools and capabilities based on the areas where tools perform well and the areas where they struggle with regard to coverage. A consumer report will be developed to better educate and guide the software assurance community in purchasing and procuring static analysis capabilities. Oftentimes when you purchase or procure a static analysis tool, you don’t really know what the tool missed. The scoring and labeling will help organizations mix and match features across static analysis tools to leverage the strengths of each tool and cover a wider attack surface, as the rough sketch after this list illustrates.
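As a rough illustration of what “mix and match” could look like in practice (hypothetical tool labels and data, not STAMP output), a combined coverage label is simply the union of each tool’s per-weakness-class coverage:

    /* Hypothetical illustration of combining per-CWE coverage labels from
       several tools; the data and the label format are made up, not STAMP results. */
    #define MAX_CWE 1024

    struct tool_label {
        const char *name;
        int covered[MAX_CWE];   /* covered[cwe] == 1 if the tool detects that CWE */
    };

    /* Combined "attack surface" coverage is the union of the tools' labels. */
    int combined_coverage(const struct tool_label *tools, int n_tools,
                          int combined[MAX_CWE])
    {
        int cwe, t, total = 0;
        for (cwe = 0; cwe < MAX_CWE; cwe++) {
            combined[cwe] = 0;
            for (t = 0; t < n_tools; t++)
                if (tools[t].covered[cwe])
                    combined[cwe] = 1;
            total += combined[cwe];
        }
        return total;   /* number of weakness classes covered by the tool mix */
    }

The point of the consumer-report style label is that this kind of comparison becomes possible at procurement time, instead of being discovered after the tools miss something in production code.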

One of the interesting and unique aspects of working with researchers and computer scientists who study static analysis, users of commercial static analysis tools, and the commercial tool vendors is that I get so much useful information about problem areas. One common theme I hear is that no one tool can give you the coverage you need. Organizations should be able to read a label on a given static analysis tool (the same way nutrition labels appear on foods) to understand the strengths, the “sweet spots,” of static analysis tools.

Before we pronounce the death of static analysis, let’s see what innovation and improvements STAMP will provide to help raise the bar in static analysis tools and capabilities. Static analysis is just one analysis context -- alongside DAST and IAST -- that can be leveraged to help reduce false positives and to provide more visibility into “real” bugs and potential vulnerabilities that exist in software.

All application security testing approaches have their limitations; to think that one is superior to the others is a bit naive. I’ve funded research that has shown how Hybrid Analysis Security Testing (HAST) can really improve software analysis capabilities by infusing the context of SAST and DAST together for better application security situational awareness. There is no uber approach or tool! We are seeing in other technology areas that vendors are opening up their platforms with APIs because customers want better situational awareness across their technology investments to improve overall threat management. I see the same happening with the resurrection of static analysis and a shift away from relying on a “black box” technology solution.
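To give a feel for what infusing SAST and DAST context together can mean in code, here is a hypothetical correlation sketch of my own construction (not the HAM or HAST implementation): a weakness observed dynamically against an endpoint is used to confirm a static finding of the same weakness class in the code that backs that endpoint, raising confidence that the static result is a “real” bug rather than a false positive.

    /* Hypothetical SAST/DAST correlation sketch; field names, the route table,
       and the matching rule are illustrative assumptions, not a real product. */
    #include <string.h>

    struct sast_finding { const char *file; int line; int cwe; };
    struct dast_finding { const char *endpoint; int cwe; };

    /* Toy mapping from HTTP endpoint to implementing source file; in practice,
       building this mapping is the hard part hybrid-analysis research tackles. */
    struct route { const char *endpoint; const char *file; };
    static const struct route routes[] = {
        { "/login",  "auth.c"  },
        { "/search", "query.c" },
    };

    static const char *endpoint_to_file(const char *endpoint)
    {
        size_t i;
        for (i = 0; i < sizeof(routes) / sizeof(routes[0]); i++)
            if (strcmp(routes[i].endpoint, endpoint) == 0)
                return routes[i].file;
        return "";
    }

    /* A SAST finding is "confirmed" when a DAST finding of the same CWE was
       observed against the endpoint implemented by that source file. */
    int confirmed(const struct sast_finding *s,
                  const struct dast_finding *d, int n_dast)
    {
        int i;
        for (i = 0; i < n_dast; i++)
            if (d[i].cwe == s->cwe &&
                strcmp(endpoint_to_file(d[i].endpoint), s->file) == 0)
                return 1;
        return 0;
    }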

About the Author

Kevin E. Greene

Public Sector CTO, OpenText Cybersecurity

Kevin E. Greene is a public sector expert at OpenText Cybersecurity. With more than 25 years of experience in cybersecurity, he is an experienced leader, champion, and advocate for advancing the state of the art and practice in software security. He has been successful in leading federally funded research and development (R&D) and has a proven track record in tech transition and commercialization. Notably, research from the Hybrid Analysis Mapping (HAM) project was commercialized in former technologies/products Secure Decisions’ Code Dx and Denim Group’s ThreadFix, which were acquired by Synopsys and Coalfire, respectively. Additional commercialization includes GrammaTech CodeSonar and the KDM Analytics Blade platform, as well as research transitioned to improve MITRE’s Common Weakness Enumeration (CWE) by incorporating architectural design issues from the Common Architectural Weakness Enumeration (CAWE) research project developed by the Rochester Institute of Technology (RIT).

Prior to joining OpenText Cybersecurity, Kevin worked at the MITRE Corporation supporting DevSecOps initiatives for sponsors and Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) research under the Center for Threat Informed Defense (CTID), and was a high-performing contributor to MITRE’s CWE program. Kevin valued his time serving the nation as a federal employee at the Department of Homeland Security, Science and Technology Directorate, Cyber Security Division, where he was a program manager leading federally funded research in software security.

Kevin currently serves on the advisory board/committee for the New Jersey Institute of Technology (NJIT) Cybersecurity Research Center, where he holds both a Master of Science and a Bachelor of Science in Information Systems, as well as on the external advisory boards of Bowie State University’s Computer Technology department and Bryant University’s Cybersecurity/Cloud program.
