Application Security | Commentary
Kevin E. Greene | 9/20/2017 02:00 PM

Software Assurance: Thinking Back, Looking Forward

Ten personal observations that aim to bolster the state of the art and the state of practice in application security.

For the last five years or so, I have been actively engaging with the security community in academia, industry, and government to better understand the gaps that exist in software assurance. Working within the Department of Homeland Security's Science and Technology Directorate, I've discovered some interesting things about the community's drive to increase the adoption rate of both state-of-the-art and state-of-practice tools and capabilities. Here are my top 10 observations:

Observation 1: The state of practice is lagging.

  • There is no standard way to measure and baseline how well software assurance tools perform. We don't know, with any certainty, what tools can and cannot do.
  • The OWASP Top 10 lacks the foundational science to advance AppSec practices in organizations, specifically in the methodology for data collection and analysis used to formulate the list. As Brian Glas from nVisium points out in his blog post "Musing on the OWASP Top 10 2017 RC1": "The metrics collected for the Top 10 2017 represent what was found by either tools or time-boxed humans. It's a subset of vulnerabilities that are typically found, but are probably not representative of what is actually out there or the bigger risks that are faced." There is a lot of room for improvement, and I believe that with RC2 and more involvement from the community, we can push the OWASP Top 10 beyond its intended purpose and give it a greater impact on AppSec practices.
  • NIST 800-53 is too network- and system-focused. Security controls with software assurance applicability are not included in any of the baselines (high, moderate, low), which means those controls are never tested as part of the certification and accreditation process.
  • Secure coding practices are missing in action and are not being enforced religiously in AppSec programs. 

Observation 2: Threat modeling, when automated, is very powerful.

  • There is great potential in leveraging machine learning with threat modeling. This enables a more proactive approach to software development, helping improve security designs and reduce overall security risk. (A minimal sketch of the idea follows this list.)
  • In the future, I believe threat modeling will become the core engine for all security testing.
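
To make the first bullet concrete, here is a minimal, hypothetical sketch of rule-driven threat enumeration over a data-flow model. The element kinds, fields, and STRIDE rules are illustrative assumptions, not any real tool's API; a machine-learning approach would learn rules like these from labeled designs rather than hard-coding them.

```python
# Hypothetical sketch: enumerating STRIDE threats from a data-flow model.
# Element kinds, fields, and rules are illustrative, not a real tool's API.
from dataclasses import dataclass

@dataclass
class Element:
    name: str
    kind: str                    # "process", "data_store", "data_flow", "external"
    crosses_trust_boundary: bool
    encrypted: bool

def threats_for(e: Element) -> list:
    """Tiny hand-written rule base mapping element properties to threats."""
    threats = []
    if e.kind == "data_flow" and e.crosses_trust_boundary and not e.encrypted:
        threats += ["Tampering", "Information Disclosure"]
    if e.kind == "external":
        threats.append("Spoofing")
    if e.kind == "data_store" and not e.encrypted:
        threats.append("Information Disclosure")
    if e.kind == "process" and e.crosses_trust_boundary:
        threats.append("Elevation of Privilege")
    return threats

model = [
    Element("login_request", "data_flow", crosses_trust_boundary=True, encrypted=False),
    Element("user_db", "data_store", crosses_trust_boundary=False, encrypted=False),
]
for element in model:
    print(element.name, "->", threats_for(element) or ["no rule fired"])
```

Automating this enumeration early in design is what would make threat modeling a plausible core engine for downstream security testing.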

Observation 3: There are residual risks in using static analysis and security testing tools.

  • We don't know what the tools did not find.
  • We don't know what parts of the code and attack surface the tools were able to cover.
  • Static analysis struggles with opaque code: parts of a program that static analysis cannot see into, such as runtime-computed call targets. (The sketch after this list shows one such construct.)
  • Static analysis tends to be shallow and oversimplified.
  • Heartbleed won against all static and many dynamic analysis tools.
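
As one illustration of the opaque-code problem, consider this hypothetical Python snippet: the call target is resolved at runtime via getattr(), so many static analyzers lose the taint flow from the input source to the SQL sink.

```python
# Illustrative only: a taint flow that is hard for static analysis to follow
# because the call target is resolved at runtime (an "opaque" construct).
import sqlite3

class Handlers:
    def run_query(self, conn, user_input):
        # SQL injection: untrusted input is concatenated into the query.
        return conn.execute("SELECT * FROM users WHERE name = '%s'" % user_input)

def dispatch(obj, action, *args):
    # getattr() resolves the method from a runtime string; a static tool
    # analyzing dispatch() cannot easily tell which method will execute.
    return getattr(obj, action)(*args)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
untrusted = input("name: ")                               # taint source
dispatch(Handlers(), "run_" + "query", conn, untrusted)   # hidden sink
```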

Observation 4: False positives are the proverbial pain in the rear end.

  • Many vendors err on the side of caution, building products that report findings that may not be real (false positives) rather than risk staying silent about flaws that are (false negatives).
  • Tools lack context.
  • Soundness (a low false-negative rate) comes at a price: sound tools generate a considerable amount of noise (many false positives). This is the central trade-off in static analysis, illustrated in the sketch after this list.
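
A hypothetical example of the noise side of that trade-off: a path-insensitive analyzer merges both branches below and reports cross-site scripting at the return statement, even when every real caller takes the sanitized branch.

```python
# Illustrative only: a pattern that sound-leaning analyzers often flag.
import html

def render(user_input: str, trusted_mode: bool) -> str:
    if trusted_mode:
        value = user_input               # looks like a raw taint flow
    else:
        value = html.escape(user_input)  # sanitized branch
    # A path-insensitive tool merges both branches and reports XSS here,
    # even if every caller in the codebase passes trusted_mode=False.
    return "<p>" + value + "</p>"

print(render("<script>alert(1)</script>", trusted_mode=False))
```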

Observation 5: Patching does not scale; software assurance/secure coding is our first line of defense for protecting software.

  • The window of exposure is constantly sliding to the right.
  • Poor design and architectural decisions increase the need to patch (as seen with the Equifax Apache Struts breach). Some third-party software vulnerabilities (e.g., in frameworks) are difficult to patch. (A minimal dependency-audit sketch follows this list.)
  • Human and social behaviors play a part because people resist change; we become the Achilles' heel of the software engineering process. Cybersecurity expert Dr. Diana Burley, a professor of human and organizational learning at George Washington University, credits, in part, "the rise of cyber attacks to the failure of the average computer user to take preventative measures — like patching."
  • The Internet of Things and Internet of Everything are proving that patching is becoming a lot harder for many different reasons, such as safety. I think we have more than 465,000 reasons to take this seriously: the number of pacemakers recalled in 2017 to receive a firmware security patch.
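
Here is a minimal sketch of the dependency-audit idea referenced above, assuming a hypothetical advisory table; real programs would pull advisories from a feed such as the NVD, or use a purpose-built tool like OWASP Dependency-Check or pip-audit.

```python
# Minimal sketch: flagging installed third-party packages with known issues.
# The ADVISORIES table is hypothetical; real data would come from a CVE feed.
from importlib import metadata

ADVISORIES = {
    # package name -> (affected version prefix, advisory id)
    "requests": ("2.19", "EXAMPLE-ADVISORY-001"),
}

for dist in metadata.distributions():
    name = dist.metadata["Name"].lower()
    if name in ADVISORIES:
        affected_prefix, advisory = ADVISORIES[name]
        if dist.version.startswith(affected_prefix):
            print(f"{name} {dist.version}: {advisory} applies, patch or mitigate")
```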

Observation 6: Poor tool performance creates barriers for tool adoption early in the software development process.

  • I often wonder why commercial and open source static analysis tools struggle with Juliet test cases.
  • An NSA tool study suggests that a given static analysis tool can find around 14% to 17% of weaknesses in Juliet test cases.
  • Some open source static analysis tools performed as well as, and in some cases better than, commercial ones on certain weakness classes and programming languages with Juliet. (A sketch of the Juliet bad/good test-case pattern follows this list.)
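
For readers unfamiliar with Juliet (NIST's SAMATE test suite, written in C/C++ and Java), each case pairs a flawed "bad" function with fixed "good" variants, and a tool is scored on flagging the former while staying quiet on the latter. Here is that pattern rendered, hypothetically, in Python for CWE-78 (OS command injection).

```python
# Hypothetical Python rendering of the Juliet bad/good test-case pattern.
# The real suite is C/C++ and Java; this shows the structure tools are
# scored against, using CWE-78 (OS command injection) as the weakness.
import subprocess

def bad(user_input: str) -> None:
    # CWE-78: untrusted input reaches a shell unmodified.
    subprocess.run("ping -c 1 " + user_input, shell=True)

def good(user_input: str) -> None:
    # Fixed variant: no shell; the input is a single argv token.
    subprocess.run(["ping", "-c", "1", user_input])

# A tool "passes" this case by flagging bad() and staying quiet on good().
```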

Observation 7: There is no uber tool; the sum of many is better than the sum of one.

  • Each tool has a sweet spot.
  • There are too many programming languages and weakness classes for one tool to be a jack of all trades.
  • Different testing methods find different things. 
  • I'm seeing a movement that encourages the use of multiple tools for security testing. For example, I'm a member of a technical committee for the Static Analysis Results Interchange Format (SARIF), initiated by developers at Microsoft, that is pushing for a standard format for combining the output of multiple tools. (A minimal merging sketch follows this list.)
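
To illustrate why a common format matters, here is a minimal sketch that merges and deduplicates findings from two tools. The JSON layout is deliberately simplified for the example and is not the actual SARIF schema.

```python
# Minimal sketch: merging results from multiple analyzers into one
# deduplicated list, in the spirit of SARIF. The JSON layout here is a
# simplified stand-in, not the real SARIF 2.x schema.
import json

tool_a = json.loads('''{"tool": "ToolA", "results": [
    {"ruleId": "CWE-89", "file": "app.py", "line": 42},
    {"ruleId": "CWE-79", "file": "views.py", "line": 7}]}''')
tool_b = json.loads('''{"tool": "ToolB", "results": [
    {"ruleId": "CWE-89", "file": "app.py", "line": 42}]}''')

merged = {}
for run in (tool_a, tool_b):
    for result in run["results"]:
        key = (result["ruleId"], result["file"], result["line"])
        merged.setdefault(key, []).append(run["tool"])

for (rule, path, line), tools in sorted(merged.items()):
    print(f"{path}:{line} {rule} reported by {', '.join(tools)}")
```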

Observation 8: More code equals more problems.

  • New cars today have at least 100 million lines of code — an increased attack surface. Often, more features mean more code, and more code leads to more complexity, which tends to lead to more problems. This is what software engineer Brian Knapp refers to as "software gravity" — the force that pulls features, complexity, and resources toward a software system over time.
  • Software is the new hardware.
  • With the explosion of IoT, software truly has become ubiquitous.

Observation 9: Technical debt increases software maintenance costs; organizations have no clue about the volume of technical debt they've accumulated.

  • Many take shortcuts, leading to poor design decisions that ultimately will create vulnerabilities.
  • Design debt, defect debt, and testing debt all contribute to the cost to maintain software. 
  • Frameworks like Struts require code changes and a considerable amount of testing to update, which drives up the mean time to remediate and, with it, the likelihood of accumulating technical debt.

Observation 10: Foundational science is key to forward-leaning capabilities.

  • If we are not exploring, we are not advancing the state of the art.

In the upcoming installment of this two-part series, Kevin Greene will share innovations that advance the state of the art and the state of practice in application security.


Kevin Greene is a thought leader in the area of software security assurance. He currently serves on the advisory boards for the New Jersey Institute of Technology (NJIT) Cybersecurity Research Center and Bowie State University's Computer Science department.