Ten personal observations that aim to bolster the state of the art and the state of practice in application security.

Kevin E. Greene, Public Sector CTO, CyberRes, a Micro Focus line of business

September 20, 2017

For the last five years or so, I have been actively engaging with the security community in academia, industry, and government to better understand the gaps that exist in software assurance. Working within the Department of Homeland Security's Science and Technology Directorate, I've discovered some interesting things about the community's drive to increase the adoption rate of both state-of-the-art and state-of-practice tools and capabilities. Here are my top 10 observations:

Observation 1: The state of practice is lagging.

  • There is no standard way to measure and baseline how well software assurance tools perform, so we don't know, with any real certainty, what these tools can and cannot do.

  • The OWASP Top 10 lacks the foundational science to advance AppSec practices in organizations, specifically in the methodology used for data collection and analysis in formulating the list. As Brian Glas from nVisium points out in his blog post "Musing on the OWASP Top 10 2017 RC1": "The metrics collected for the Top 10 2017 represent what was found by either tools or time-boxed humans. It's a subset of vulnerabilities that are typically found, but are probably not representative of what is actually out there or the bigger risks that are faced." There is a lot of room for improvement, and I believe that with RC2 and more involvement from the community, we can take the OWASP Top 10 beyond its intended purpose and give it a greater impact on advancing AppSec practices.

  • NIST 800-53 is too network- and system-focused. Security controls with software assurance applicability are not included in any of the baselines (high, moderate, or low), which means those controls are not tested as part of the certification and accreditation process.

  • Secure coding practices are missing in action and are not being enforced religiously in AppSec programs. 

Observation 2: Threat modeling, when automated, is very powerful.

  • There is great potential in leveraging machine learning with threat modeling. Together they could enable a more proactive approach to software development, helping improve security designs and reduce overall security risk.

  • In the future, I believe threat modeling will become the core engine for all security testing.

Observation 3: There are residual risks in using static analysis and security testing tools.

  • We don't know what the tools did not find.

  • We don't know what parts of the code and attack surface the tools were able to cover.

  • Static analysis struggles with opaque code: parts of a program, such as reflection, dynamically loaded or generated code, and calls into native libraries, that the analyzer cannot see into and therefore cannot analyze (a minimal sketch follows this list).

  • Static analysis tends to be shallow and oversimplified.

  • Heartbleed evaded every static analysis tool and many dynamic analysis tools.
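
To make the opaque-code point concrete, here is a minimal Python sketch of a dynamic-dispatch pattern; the module and function names in the commented usage are hypothetical. Most static analyzers see only strings here, so whatever runs behind this call is effectively invisible to the tool.

```python
import importlib

def run_report(module_name: str, func_name: str, user_input: str):
    """Dynamically load a module and call one of its functions.

    A static analyzer sees only string values at this point; it cannot
    tell which module or function will actually run, so it cannot follow
    user_input into the called code. Everything behind this call is
    opaque to the analysis.
    """
    module = importlib.import_module(module_name)   # resolved at runtime
    handler = getattr(module, func_name)            # resolved at runtime
    return handler(user_input)                      # taint flow is lost here

# Hypothetical usage: the real target is decided by configuration or
# request data, which is exactly what makes it invisible to the tool.
# run_report("reports.sales", "render", request_params["q"])
```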

Observation 4: False-positives — the proverbial pain in the rear end.

  • Many vendors would rather err on the side of caution, building products that report issues that may not be real rather than stay silent about issues that are.

  • Tools lack context.

  • To be sound (have a low false-negative rate), an analysis must accept a trade-off: it will generate a considerable amount of noise in the form of false positives. That is the central trade-off in static analysis (a minimal illustration follows this list).
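
A minimal, hypothetical illustration of that trade-off: the project-specific sanitizer below is enough for a human reviewer, but an analyzer that does not recognize it must either flag the safe flow (a false positive) or risk missing real injections elsewhere (a false negative).

```python
import re

def strip_tags(value: str) -> str:
    """Project-specific sanitizer: keeps only letters, digits, and spaces."""
    return re.sub(r"[^A-Za-z0-9 ]", "", value)

def greet(request_param: str) -> str:
    # The value is sanitized above, so this output cannot carry markup.
    # An analyzer that does not know strip_tags() is a sanitizer must
    # assume the worst and report reflected XSS here anyway; staying
    # sound (not missing real XSS) means flagging this safe flow too.
    # That is the noise the trade-off produces.
    return "<p>Hello, " + strip_tags(request_param) + "</p>"
```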

Observation 5: Patching does not scale — software assurance/secure coding is our first line of defense for protecting software.

  • The window of exposure keeps sliding to the right: vulnerabilities remain exploitable long after disclosure because fixes take so long to reach production.

  • Poor design and architectural decisions increase the need to patch (as seen with the Equifax Apache Struts breach), and some third-party software vulnerabilities, such as those in frameworks, are difficult to patch.

  • Human and social behaviors play a part because people resist change; we become the Achilles' heel of the software engineering process. Cybersecurity expert Dr. Diana Burley, a professor of human and organizational learning at George Washington University, credits, in part, "the rise of cyber attacks to the failure of the average computer user to take preventative measures — like patching."

  • The Internet of Things and Internet of Everything are proving that patching is becoming a lot harder for many different reasons, such as safety. The roughly 465,000 pacemakers recently recalled for firmware security updates give us that many reasons alone.

Observation 6: Poor tool performance creates barriers for tool adoption early in the software development process.

  • I often wonder why commercial and open source static analysis tools struggle with the Juliet test cases, the synthetic benchmark suite of seeded weaknesses developed by the NSA's Center for Assured Software and distributed through NIST.

  • An NSA tool study suggests that a given static analysis tool can find around 14% to 17% of weaknesses in Juliet test cases.

  • Some open source static analysis tools did just as well as, and in some cases better than, commercial ones on certain weakness classes and programming languages with Juliet (a rough sketch of how this kind of scoring works follows this list).
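
Benchmark scoring of this kind is conceptually simple. The sketch below is not the NSA study's methodology, just a rough illustration with made-up data: each Juliet-style test case has a known, seeded CWE, and recall is the fraction of those the tool actually reports.

```python
from collections import defaultdict

# Hypothetical ground truth: Juliet-style test cases and their seeded CWE.
expected = [
    {"testcase": "CWE89_SQL_Injection__basic_01", "cwe": "CWE-89"},
    {"testcase": "CWE78_OS_Command_Injection__basic_01", "cwe": "CWE-78"},
    {"testcase": "CWE190_Integer_Overflow__basic_01", "cwe": "CWE-190"},
]

# Hypothetical tool output: which test cases it flagged, and with what CWE.
reported = {("CWE89_SQL_Injection__basic_01", "CWE-89")}

def recall_by_cwe(expected, reported):
    """Fraction of seeded weaknesses the tool found, grouped by CWE."""
    found = defaultdict(int)
    total = defaultdict(int)
    for case in expected:
        total[case["cwe"]] += 1
        if (case["testcase"], case["cwe"]) in reported:
            found[case["cwe"]] += 1
    return {cwe: found[cwe] / total[cwe] for cwe in total}

if __name__ == "__main__":
    for cwe, recall in recall_by_cwe(expected, reported).items():
        print(f"{cwe}: recall {recall:.0%}")
```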

Observation 7: There is no uber tool — the sum of many is better than the sum of one...

  • Each tool has a sweet spot.

  • There are too many programming languages and weakness classes for one tool to be a jack of all trades.

  • Different testing methods find different things. 

  • I'm seeing a movement that encourages the use of multiple tools for security testing. For example, I'm a member of a technical committee for the Static Analysis Results Interchange Format (SARIF), initiated by developers at Microsoft, which is pushing for a standard format for combining the output of multiple tools (a brief sketch of what that enables follows).
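
SARIF is a JSON-based format. As a rough illustration of why a common format matters, the sketch below reads only a handful of SARIF properties, and the file names are hypothetical; it flattens the results of several tools into one list that can be de-duplicated or triaged together.

```python
import json

def load_findings(sarif_path: str):
    """Pull (tool, rule, file, line, message) records out of one SARIF file."""
    with open(sarif_path) as f:
        sarif = json.load(f)
    findings = []
    for run in sarif.get("runs", []):
        tool = run["tool"]["driver"]["name"]
        for result in run.get("results", []):
            locations = result.get("locations") or [{}]
            loc = locations[0].get("physicalLocation", {})
            findings.append({
                "tool": tool,
                "rule": result.get("ruleId"),
                "file": loc.get("artifactLocation", {}).get("uri"),
                "line": loc.get("region", {}).get("startLine"),
                "message": result.get("message", {}).get("text"),
            })
    return findings

if __name__ == "__main__":
    # Hypothetical output files from three different analyzers.
    merged = []
    for path in ["toolA.sarif", "toolB.sarif", "toolC.sarif"]:
        merged.extend(load_findings(path))
    # Every tool's findings now share one shape and can be compared,
    # de-duplicated, or fed into a single dashboard.
    print(f"{len(merged)} findings from {len({f['tool'] for f in merged})} tools")
```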

Observation 8: More code equals more problems.

  • New cars today have at least 100 million lines of code — an increased attack surface. Often, more features mean more code, and more code leads to more complexity, which tends to lead to more problems. This is what software engineer Brian Knapp refers to as "software gravity" — the force that pulls features, complexity, and resources toward a software system over time.

  • Software is the new hardware.

  • With the explosion of IoT, software truly has become ubiquitous.

Observation 9: Technical debt increases software maintenance costs; organizations have no clue about the volume of technical debt they've accumulated.

  • Many development teams take shortcuts, leading to poor design decisions that will ultimately create vulnerabilities.

  • Design debt, defect debt, and testing debt all contribute to the cost to maintain software. 

  • Patching frameworks like Struts requires code changes and a considerable amount of testing, which increases the mean time to remediate and, with it, the likelihood of accumulating technical debt.

Observation 10: Foundational science is key to forward-leaning capabilities.

  • If we are not exploring, we are not advancing the state of the art.

In the upcoming installment of this two-part series, Kevin Greene will share innovations that advance the state of the art and the state of practice in application security.


About the Author(s)

Kevin E. Greene

Public Sector CTO, CyberRes, a Micro Focus line of business

Kevin is a strong advocate and champion for advancing and improving software security practices. He has over 25 years of combined public and private sector expertise in cybersecurity. In his current role as Public Sector CTO for CyberRes, a Micro Focus line of business, Kevin enjoys helping organizations build cyber resiliency capabilities in their operational environments to protect missions and businesses from the effects of cyberattacks.
