Application Security

Kevin E. Greene

Software Assurance: Thinking Back, Looking Forward

Ten personal observations aimed at bolstering the state of the art and the state of practice in application security.

For the last five years or so, I have been actively engaging with the security community in academia, industry, and government to better understand the gaps that exist in software assurance. Working within the Department of Homeland Security's Science and Technology Directorate, I've discovered some interesting things about the community's drive to increase the adoption rate of both state-of-the-art and state-of-practice tools and capabilities. Here are my top 10 observations:

Observation 1: The state of practice is lagging.

  • There is no standard way to measure and baseline how well software assurance tools perform. We don't know, with any certainty, what tools can and cannot do.
  • The OWASP Top 10 lacks the foundational science to advance AppSec practices in organizations, specifically in relation to the methodology for data collection and data analytics used in formulating the list. As Brian Glas from nVisium points out in his blog post "Musing on the OWASP Top 10 2017 RC1": "The metrics collected for the Top 10 2017 represent what was found by either tools or time-boxed humans. It's a subset of vulnerabilities that are typically found, but are probably not representative of what is actually out there or the bigger risks that are faced." There is a lot of room for improvement, and I believe with RC2 and more involvement from the community, we can advance the OWASP Top 10 beyond its intended purpose to have a greater impact on advancing AppSec practices.
  • NIST SP 800-53 is too network- and system-focused. There are security controls with software assurance applicability that are not included in any of the baselines (low, moderate, high), which means those controls are not being tested as part of the certification and accreditation process.
  • Secure coding practices are missing in action and are not being enforced religiously in AppSec programs. 

Observation 2: Threat modeling, when automated, is very powerful.

  • There is great potential in leveraging machine learning with threat modeling. This can enable a more proactive approach to software development, which would help improve security designs and reduce overall security risks.
  • In the future, I believe threat modeling will become the core engine for all security testing.
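To make the idea of automated threat modeling concrete, here is a minimal illustrative sketch (not any real product): a rule table maps element types in a simple data-flow model to candidate STRIDE threats, so threats can be enumerated mechanically as the design evolves. The element types, rule mappings, and model below are hypothetical examples.

```python
# Illustrative sketch of automated, rule-driven threat enumeration over a
# data-flow model, STRIDE-style. Rules and element names are hypothetical.

STRIDE_RULES = {
    "external_entity": ["Spoofing", "Repudiation"],
    "process": ["Tampering", "Elevation of Privilege"],
    "data_store": ["Information Disclosure", "Tampering"],
    "data_flow": ["Information Disclosure", "Denial of Service"],
}

def enumerate_threats(model):
    """Map each element of the data-flow model to its candidate threats."""
    threats = []
    for name, element_type in model:
        for threat in STRIDE_RULES.get(element_type, []):
            threats.append((name, threat))
    return threats

# A toy three-element model of a web application.
model = [
    ("browser", "external_entity"),
    ("web_app", "process"),
    ("user_db", "data_store"),
]

for element, threat in enumerate_threats(model):
    print(f"{element}: {threat}")
```

A machine-learning layer could then rank or prune these candidates based on historical findings, which is where the proactive value lies.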

Observation 3: There are residual risks in using static analysis and security testing tools.

  • We don't know what the tools did not find.
  • We don't know what parts of the code and attack surface the tools were able to cover.
  • Static analysis struggles with opaque code: parts of the code that static analysis cannot meaningfully analyze.
  • Static analysis tends to be shallow and oversimplified.
  • Heartbleed won against all static and many dynamic analysis tools.
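As a small illustration of opaque code (my example, not from any particular tool's documentation), consider a call target computed at runtime. A static analyzer sees only "call something on something"; which function actually runs depends on strings the tool may not be able to resolve.

```python
# Minimal illustration of "opaque" code: the module and function are chosen
# at runtime, so a static analyzer cannot easily tell which code executes.

import importlib

def run_handler(module_name: str, func_name: str, payload: str):
    mod = importlib.import_module(module_name)   # module chosen at runtime
    handler = getattr(mod, func_name)            # function chosen at runtime
    return handler(payload)                      # opaque call site

# Whether payload ever reaches a dangerous sink depends entirely on values
# that may come from config files or user input.
result = run_handler("json", "loads", '{"ok": true}')
print(result)
```

Reflection, dynamic loading, and native calls all produce this kind of blind spot, which is part of why coverage claims from a single tool deserve skepticism.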

Observation 4: False-positives — the proverbial pain in the rear end.

  • Many vendors would rather err on the side of caution, building products that report something that may not be there (a false positive) rather than stay silent about something that actually is there (a false negative).
  • Tools lack context.
  • To be sound (a low false-negative rate), a tool must accept a trade-off: it will generate a considerable amount of noise (many false positives). This is the central trade-off in static analysis.

Observation 5: Patching does not scale; software assurance/secure coding is our first line of defense for protecting software.

  • The window of exposure is constantly sliding to the right.
  • Poor design and architectural decisions increase the need to patch (as seen with the Equifax Apache Struts breach). Some third-party software (e.g., framework) vulnerabilities are difficult to patch.
  • Human and social behaviors play a part because people resist change; we become the Achilles' heel of the software engineering process. Cybersecurity expert Dr. Diana Burley, a professor of human and organizational learning at George Washington University, credits, in part, "the rise of cyber attacks to the failure of the average computer user to take preventative measures — like patching."
  • The Internet of Things and Internet of Everything are proving that patching is becoming a lot harder for many different reasons, such as safety. The 2017 recall of some 465,000 pacemakers for a firmware security update gives us at least that many reasons to take this seriously.

Observation 6: Poor tool performance creates barriers for tool adoption early in the software development process.

  • I often wonder why commercial and open source static analysis tools struggle with Juliet test cases.
  • An NSA tool study suggests that a given static analysis tool can find around 14% to 17% of weaknesses in Juliet test cases.
  • Some open source static analysis tools did just as well as, and in some cases better than, commercial ones on certain weakness classes and programming languages with Juliet.
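For readers unfamiliar with Juliet, each test case pairs a "bad" variant containing a known CWE with a "good" variant that fixes it, so a tool's recall and precision can be scored mechanically. Juliet itself is written in C/C++ and Java; the sketch below merely illustrates the bad/good pattern in Python using OS command injection (CWE-78).

```python
# Juliet-style test-case pattern, sketched in Python for illustration:
# a "bad" variant with a known weakness and a "good" variant without it.

import subprocess

def bad_os_command(user_arg: str) -> str:
    # CWE-78: OS command injection -- tainted data is concatenated into a
    # shell command, so input like "; rm -rf /" would be interpreted.
    return subprocess.run("echo " + user_arg, shell=True,
                          capture_output=True, text=True).stdout

def good_os_command(user_arg: str) -> str:
    # Fixed variant: argument vector, no shell interpretation of the input.
    return subprocess.run(["echo", user_arg],
                          capture_output=True, text=True).stdout

print(good_os_command("hello").strip())
```

A tool that flags `bad_os_command` but not `good_os_command` scores a true positive and a true negative; the NSA study's 14% to 17% figure suggests most tools miss far more of these pairs than one might expect.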

Observation 7: There is no uber tool; the sum of many is better than the sum of one...

  • Each tool has a sweet spot.
  • There are too many programming languages and weakness classes for one tool to be a jack of all trades.
  • Different testing methods find different things. 
  • I'm seeing a movement that is encouraging the use of multiple tools for security testing. For example, I'm a member of a technical committee (the Static Analysis Results Interchange Format, or SARIF) initiated by developers at Microsoft to push for a standard format for incorporating multiple tool outputs.
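The core idea behind SARIF is simple: each tool's output becomes a separate "run" inside one log, so results from many analyzers can be aggregated and triaged together. The sketch below shows that shape in heavily simplified form; the real SARIF 2.1.0 schema has many more required properties, and the tool names and rule IDs here are made up.

```python
# Simplified sketch of SARIF's "one run per tool" aggregation model.
# The real SARIF 2.1.0 schema is far richer; this only shows the shape.

import json

def make_run(tool_name: str, results: list) -> dict:
    """Wrap one tool's findings as a SARIF-style run."""
    return {"tool": {"driver": {"name": tool_name}}, "results": results}

def merge_runs(runs: list) -> dict:
    """Combine runs from multiple tools into a single log."""
    return {"version": "2.1.0", "runs": runs}

# Hypothetical findings from two different analyzers.
tool_a = make_run("ToolA", [{"ruleId": "SQLI-1",
                             "message": {"text": "possible SQL injection"}}])
tool_b = make_run("ToolB", [{"ruleId": "CMD-7",
                             "message": {"text": "shell command built from input"}}])

log = merge_runs([tool_a, tool_b])
print(json.dumps(log, indent=2))
```

With a common format, the "use multiple tools" advice stops being a tooling headache and becomes a merge step.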

Observation 8: More code equals more problems.

  • New cars today have at least 100 million lines of code — an increased attack surface. Often, more features mean more code, and more code leads to more complexity, which tends to lead to more problems. This is what software engineer Brian Knapp refers to as "software gravity" — the force that pulls features, complexity, and resources toward a software system over time.
  • Software is the new hardware.
  • With the explosion of IoT, software truly has become ubiquitous.

Observation 9: Technical debt increases software maintenance costs; organizations have no clue about the volume of technical debt they've accumulated.

  • Many take shortcuts, leading to poor design decisions that ultimately will create vulnerabilities.
  • Design debt, defect debt, and testing debt all contribute to the cost to maintain software. 
  • Frameworks like Struts require code changes and a considerable amount of testing, which increases the mean time to remediate and, with it, the likelihood of accumulating technical debt.

Observation 10: Foundational science is a key to forward-leaning capabilities.

  • If we are not exploring, we are not advancing the state of the art.

In the upcoming installment of this two-part series, Kevin Greene will share innovations that advance the state-of-art and the state-of-practice in application security. 


Kevin Greene is a thought leader in the area of software security assurance. He currently serves on the advisory board for the New Jersey Institute of Technology (NJIT) Cybersecurity Research Center and Bowie State University's Computer Science department.