Certifying Software: Why We’re Not There Yet

Finding a solution to the software security and hygiene problem will take more than an Underwriters Laboratories seal of approval.

Kevin E. Greene, Public Sector CTO, OpenText Cybersecurity

October 12, 2016


There’s no arguing with the fact that acquirers of software need assurances that the software they purchase is safe and stable to use.  However, I struggle with the notion that analyzing software and assigning a pass/fail rating is the best solution, given that many state-of-the-art software assurance tools, technologies and capabilities have not kept pace with the complexity and size of modern software. Of particular concern to me are the challenges in performance, precision, and soundness of many static analysis tools, both open-source and commercial. 

Static analysis is listed by Underwriters Laboratories (UL) as one of the assessments that will be used to identify weaknesses in software, along with other activities such as fuzz testing, evaluation of known vulnerabilities, hunting for malware, and static binary analysis. While this all makes sense, there is a dirty little secret about static analysis tools that is largely ignored: there is residual risk in using any of these tools.

The problem with residual risk is that we don't know which parts of the software the tools were unable to analyze due to "opaqueness": parts of the code written so poorly that tools can't understand them. We also don't know what percentage of actual weaknesses the tools correctly report, or at what point these tools start to oversimplify and miss true bugs due to performance constraints. My point is that there needs to be some ground truth established for software assurance tools that both sets a baseline and measures what the tools can and cannot do.
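A small, hypothetical sketch of that residual risk, assuming a taint-style checker that follows only direct call edges: the injection sink below is easy to flag when reached directly, but becomes "opaque" when the call target is computed at runtime, as with Python's `getattr`. All names here are invented for illustration, not taken from any particular tool study.

```python
import sqlite3

def find_user(db, username):
    # Direct string formatting into SQL: a classic injection sink that
    # most static analyzers flag when the call chain is visible.
    return db.execute(
        "SELECT name FROM users WHERE name = '%s'" % username
    ).fetchall()

def dispatch(obj, action, arg):
    # Dynamic dispatch: the call target is computed at runtime, so a
    # checker that only follows direct calls cannot connect `arg`
    # (possibly attacker-controlled) to the sink inside find_user.
    # This unanalyzed path is the residual risk.
    return getattr(obj, action)(arg)

class UserStore:
    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE users (name TEXT)")
        self.db.execute("INSERT INTO users VALUES ('alice')")

    def lookup(self, username):
        return find_user(self.db, username)

store = UserStore()
# A benign lookup and an injection payload travel the same opaque path.
print(dispatch(store, "lookup", "alice"))          # prints [('alice',)]
print(dispatch(store, "lookup", "' OR '1'='1"))    # matches every row
```

The point of the sketch is not that dynamic dispatch is always unanalyzable, but that a pass/fail certification built on such tools inherits whatever the tools silently skipped.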

Federally Funded Programs: Trying to Move the Needle
There have been several attempts by the NSA Center for Assured Software, NIST, and other federally funded programs to conduct tool studies to better understand areas where modernization is needed. The DHS Science & Technology Static Tool Analysis Modernization (STAMP) research project will not only modernize a set of open-source static analysis tools, but will also offer a structured way of measuring static analysis tools and provide a "consumer report" identifying the strengths and weaknesses of each tool.

The government took the lead in addressing this problem by conducting these studies, but, in many cases, researchers were restricted from sharing the results due to license agreements or DeWitt clauses, which prevent distributing the test results to the security community at large. Many commercial vendors have also refused to share their results as part of their participation in NIST's Static Analysis Tool Exposition (SATE).

I personally believe that sharing tool study results (in a very healthy way) will ultimately stimulate competition and innovation, and improve software assurance. Georgetown University, for example, recently examined license agreements and similar clauses in a white paper titled "Vendor Truth Serum," which explores how to share results from tool studies in a way that helps organizations make informed choices, accomplish satisfactory testing in minimal time at minimal expense, and choose the set of tools that best matches the software under test.

The Role of Secure Architecture and Design
While many of the technologies called out in the UL 2900 series (you have to pay for the documents) are behind the power curve of innovation, it's important to carefully consider secure design, architectural, and engineering principles early in the process, before a system is developed, because poor architecture decisions introduce security flaws that often lead to security breaches. A Carnegie Mellon study suggests that over 90% of security breaches can be traced back to poorly designed and developed systems.

Mehdi Mirakhorli, PhD, assistant professor in the Department of Software Engineering at Rochester Institute of Technology, believes the problem of security architecture erosion is exacerbated by the fact that popular software engineering tools and environments fail to integrate architecture analysis and security thinking into developers' daily activities. As a result, programmers are not kept fully informed of the consequences of their programming choices or refactoring activities on the security design of the system.

For instance, a third-party library with good security hygiene used in the wrong way can be as bad as, or worse than, a flawed third-party library. Understanding the secure design principles of a system or application is important because it gives context to how security tactics and features should be implemented to enforce security. An in-depth assessment of a system's design and architecture is a critical component, especially when there are plans to tie a software liability component to UL certification. With the emergence of the Internet of Things (IoT), secure design becomes even more important, as seen in the recent discovery of security flaws in Johnson & Johnson insulin pumps.
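As a concrete, hypothetical illustration of a sound library used unsoundly: Python's standard hashlib is perfectly good code, yet one of the call patterns below is a classic password-storage design flaw while the other uses the very same library as intended. The choice of MD5 and the iteration count are illustrative assumptions, not recommendations drawn from the UL documents.

```python
import hashlib
import os

password = b"correct horse battery staple"

# Misuse: hashlib itself is fine, but a single unsalted MD5 round is
# the wrong construction for storing passwords. Identical passwords
# always yield identical digests, and brute force is cheap.
weak_digest = hashlib.md5(password).hexdigest()

# Same library, used as designed for this job: a salted, deliberately
# slow key-derivation function (PBKDF2-HMAC-SHA256).
salt = os.urandom(16)
strong_digest = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)

print(len(weak_digest))   # 32 hex characters, no salt, fast to crack
print(len(strong_digest)) # 32 bytes, salted, expensive to brute-force
```

No scanner verdict on the library alone would catch the first pattern; only a review that understands the design intent (storing credentials) distinguishes the two.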

I would also like to see a greater emphasis on secure design reviews and improved strategies to assess whether or not security is actually "built in." Testing and assessing the system using architecture and design context will help guide the use of static and dynamic analysis, malware hunting, static binary analysis, and fuzz testing.

Forward Leaning with More Innovation
Overall, I like the concept of certifying software, or some variation of a consumer report for software security like the one that Mudge and Sarah Zatko developed. The consumer report format gives the user or acquirer the information needed to make informed decisions and quantitatively compare different products. My hesitation is with the science and innovation around the technologies and capabilities of software assurance tools. As a program manager in software assurance R&D, I work with computer scientists from industry and academia and understand how far behind software assurance tools are in keeping pace with modern software.

One of my researchers, Henny Sipma, PhD, suggests that static analysis capabilities are 15 years behind. Other computer scientists and researchers, such as Bart Miller and James Kupsch, described issues in static analysis as far back as April 2014, when none of the existing tools were able to find the weakness that exposed the Heartbleed vulnerability.
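The Heartbleed weakness (CVE-2014-0160) that those tools missed was, at its core, a missing bounds check on an attacker-supplied length field. The following is a hypothetical Python sketch of that pattern, not OpenSSL's actual C code; the simulated memory layout and names are invented for illustration.

```python
# Simulated process memory: the echoed heartbeat payload happens to
# sit next to sensitive data, as it did in the OpenSSL process heap.
MEMORY = b"PING" + b"SECRET_PRIVATE_KEY"

def heartbeat(payload: bytes, claimed_len: int) -> bytes:
    # Heartbleed-style flaw: the response length comes from the
    # sender's length field, never checked against the actual payload.
    return MEMORY[:claimed_len]

def heartbeat_fixed(payload: bytes, claimed_len: int) -> bytes:
    # The entire fix is one comparison: never echo more than received.
    if claimed_len > len(payload):
        raise ValueError("length field exceeds payload size")
    return MEMORY[:claimed_len]

print(heartbeat(b"PING", 4))   # honest request: b'PING'
print(heartbeat(b"PING", 22))  # over-read leaks the adjacent secret
```

A bug this small hinges on reasoning about attacker-controlled lengths across a protocol boundary, which is exactly the kind of semantic, design-level property that signature- and pattern-driven tools struggle with.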

We need more science and innovation not only in static analysis, but in automated tools that are designed to look for bad things in software. According to Mirakhorli, "an analysis of reported CVEs suggests that roughly 50% of security problems are the result of software design flaws, poor architectural implementation, violation of design principles in source code, and degradation of security architecture. Unfortunately, such problems are exacerbated by the fact that current tools do not provide enough architecture-centric analysis to detect erosion of security architecture in the code and generate appropriate recommendations for the developers on how to fix these design issues."

It’s definitely time to raise the bar with better tech innovation and novelty so that movements and initiatives like UL certification can help shape a healthier and safer software world.

About the Author(s)

Kevin E. Greene

Public Sector CTO, OpenText Cybersecurity

Kevin E. Greene is a public sector expert at OpenText Cybersecurity. With more than 25 years of experience in cybersecurity, he is an experienced leader, champion, and advocate for advancing the state of the art and practice in software security. He has been successful in leading federally funded research and development (R&D) and has a proven track record in tech transition and commercialization. Notably, research from the Hybrid Analysis Mapping (HAM) project was commercialized in former products by Secure Decisions (Code Dx) and Denim Group (ThreadFix), which were acquired by Synopsys and Coalfire, respectively. Additional commercialization includes GrammaTech CodeSonar and the KDM Analytics Blade platform, along with research transitioned to improve MITRE's Common Weakness Enumeration (CWE) by incorporating architectural design issues from the Common Architectural Weakness Enumeration (CAWE) research project developed by Rochester Institute of Technology (RIT).

Prior to joining OpenText Cybersecurity, Kevin worked at the MITRE Corporation, supporting DevSecOps initiatives for sponsors and Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) research under the Center for Threat-Informed Defense (CTID), and serving as a high-performing contributor to MITRE's CWE program. Kevin valued his time serving the nation as a federal employee at the Department of Homeland Security, Science and Technology Directorate, Cyber Security Division, where he was a program manager leading federally funded research in software security.

Kevin currently serves on the advisory board of the New Jersey Institute of Technology (NJIT) Cybersecurity Research Center, where he holds both a Master of Science and a Bachelor of Science in Information Systems, as well as on the external advisory boards of the Bowie State University Computer Technology department and the Bryant University Cybersecurity/Cloud program.

