Application Security
3/15/2017 10:15 AM
Mike Pittenger
Commentary

Security in the Age of Open Source

Dramatic changes in the use of open source software over the past decade demand major changes in security testing regimens today. Here's what you need to know and do about it.

There have been many changes in recent years in how organizations build, deploy, and manage software, all focused on shortening development lifecycles. Agile development aims to get functional software to users more quickly. DevOps and containers are being adopted to deploy applications faster and to simplify the management of production software.

The biggest change, however, is the adoption of open source. Ten years ago, most organizations avoided using open source. They were fearful of egregious licenses, and many didn't trust software that wasn't built in-house. Today, it is rare to see software that doesn't include open source. We embrace open source with good reason: it provides critical functionality we no longer need to build from scratch, lowering development costs while accelerating time to market. We frequently see in-house applications that are 75% or more open source. Even commercial applications are increasingly based on open source. Our 2016 study, The State of Open Source Security in Commercial Applications, found that over 35% of the average commercial code base was open source, made up of over 100 distinct open source components. Over a third of the code bases we examined were 40% or more open source.

These dramatic changes in the use of open source require modifications to organizations' application security strategies. People understand that sending code under development to a separate security team for testing breaks the agile model, and that reuse of base-level containers risks propagating vulnerabilities in the Linux stack. What is less well-understood is how open source requires changes to our security testing regimens.

[Mike will be speaking about open source myths and perceptions during Interop ITX, May 15-19, at the MGM Grand in Las Vegas. To learn more about his presentation or other Interop security tracks, or to register, click on the live links.]

Organizations can conduct a variety of security activities throughout the software development life cycle (SDLC), including defining security requirements, threat modeling, and automated testing. These are all great for the code you write. However, traditional security testing tools like static and dynamic analysis have proved ineffective at identifying security issues in open source "in the wild." Heartbleed was present in OpenSSL for two years before it was found. Shellshock was in Bash for over 25 years! Think of how many times applications using these components were subjected to static analysis, dynamic analysis, and pen tests during those years, without any of the tools (or the people using them) noticing the bugs.

Don't get me wrong: static and dynamic analysis are great tools, if you understand what they are good at and what they miss. They undoubtedly help us all build more secure code by identifying coding errors that result in vulnerabilities. But they aren't capable of finding all classes of vulnerabilities, nor are they capable of finding all instances of vulnerabilities in the classes they do cover (Heartbleed, a buffer over-read, is a case in point). They just aren't good at finding vulnerabilities in open source, even those disclosed years ago. This could be because the vulnerabilities in open source are too complex for the tools, or because control and data flow are difficult to map in projects built by hundreds of developers over time. The end result is the same, however.

If traditional tools don't work, and open source is part of your code base, you need to adopt other controls. At a high level, these controls are very straightforward. You need visibility and information. The former is a list of the open source you're using in an application. The latter is ongoing information about the security status of each component. 

These are simple tasks on the surface, but difficult in practice. Developers are accustomed to pulling in open source from internal repos, GitHub, SourceForge, and project home pages. They are often less than diligent about documenting all of the open source in use, including transitive dependencies (the other open source components a component requires to operate). Open source is also likely entering the code base through reused internal components, and if developers are including outsourced code or commercial components, open source is likely coming from those sources as well.
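
To make the visibility task concrete, here is a minimal sketch in Python that walks a source tree and lists the dependencies declared in two common package manifests. The file names and parsing rules are illustrative only; a real software composition analysis tool goes much further, resolving transitive dependencies and detecting code copied in without any manifest.

```python
# Minimal dependency-inventory sketch: walk a source tree and print the
# dependencies declared in common package manifests. Illustrative only --
# it does not resolve transitive dependencies or detect copied-in code.
import json
import os

MANIFESTS = {"package.json", "requirements.txt"}  # extend per ecosystem

def declared_dependencies(root):
    """Yield (manifest_path, component, version) for declared dependencies."""
    for dirpath, _, filenames in os.walk(root):
        for name in MANIFESTS & set(filenames):
            path = os.path.join(dirpath, name)
            if name == "package.json":
                with open(path) as f:
                    data = json.load(f)
                for section in ("dependencies", "devDependencies"):
                    for pkg, ver in data.get(section, {}).items():
                        yield path, pkg, ver
            elif name == "requirements.txt":
                with open(path) as f:
                    for line in f:
                        line = line.strip()
                        # Only pinned "pkg==version" lines; skip comments.
                        if line and not line.startswith("#") and "==" in line:
                            pkg, ver = line.split("==", 1)
                            yield path, pkg.strip(), ver.strip()

if __name__ == "__main__":
    for path, pkg, ver in declared_dependencies("."):
        print(f"{pkg}=={ver}  ({path})")
```

Running something like this at build time and diffing the output against the previous release is a cheap way to notice new components as they appear.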

Once you have a complete list of components, including version levels, you need a reference source for security information. (You should also check licensing information to make sure you're not risking your own IP by using components under restrictive licenses improperly.) The National Vulnerability Database (NVD) is a good starting point, allowing you to look up components by version number and view associated vulnerabilities. If you do this diligently, you can leverage all of the benefits of open source and mitigate the risk associated with using components with known vulnerabilities (OWASP Top Ten item A9).
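
As an illustration of that lookup, the sketch below queries NVD's public CVE API (the JSON 2.0 endpoint) for a single exact CPE name. The endpoint and parameters are NVD's current public API rather than anything specific to this article, and the OpenSSL 1.0.1f example is simply the Heartbleed-era release.

```python
# Hedged sketch: check one component/version against the NVD by CPE name,
# using NVD's public CVE API 2.0 endpoint.
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def known_cves(cpe_name):
    """Return the CVE IDs the NVD associates with an exact CPE 2.3 name."""
    resp = requests.get(NVD_API, params={"cpeName": cpe_name}, timeout=30)
    resp.raise_for_status()
    return [item["cve"]["id"] for item in resp.json().get("vulnerabilities", [])]

if __name__ == "__main__":
    # Example CPE: the Heartbleed-era OpenSSL release.
    for cve_id in known_cves("cpe:2.3:a:openssl:openssl:1.0.1f:*:*:*:*:*:*:*"):
        print(cve_id)
```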

That's a great first step, but what happens the day after you ship?

Security is not static. We need to track the ongoing security of the components we use. Since 2014, NVD has disclosed over 7,000 vulnerabilities in open source components. Not all of these are well publicized outside of NVD. We all know about Heartbleed, for example, but what about the 89 vulnerabilities reported in NVD for OpenSSL since Heartbleed? 
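
Tracking ongoing disclosures can start just as simply. The sketch below assumes the same NVD endpoint and its published-date parameters: it pulls CVEs from the past week and flags any whose description mentions a component in your inventory. Matching on description text is crude; real tools match components to CVEs via CPE identifiers instead.

```python
# Monitoring sketch: flag recent NVD disclosures that mention components
# from a (hypothetical) inventory. Text matching is crude and illustrative;
# real tooling matches on CPE identifiers.
from datetime import datetime, timedelta, timezone
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def recent_cves(days=7):
    """Fetch CVEs published in the last `days` days (NVD caps the window)."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=days)
    fmt = "%Y-%m-%dT%H:%M:%S.000"
    resp = requests.get(NVD_API, params={
        "pubStartDate": start.strftime(fmt),
        "pubEndDate": end.strftime(fmt),
    }, timeout=60)
    resp.raise_for_status()
    return resp.json().get("vulnerabilities", [])

def flag_inventory_hits(inventory, days=7):
    """inventory: set of lowercase component names, e.g. {"openssl", "bash"}."""
    for item in recent_cves(days):
        cve = item["cve"]
        descs = cve.get("descriptions", [])
        text = next((d["value"] for d in descs if d.get("lang") == "en"),
                    descs[0]["value"] if descs else "").lower()
        for component in inventory:
            if component in text:
                print(cve["id"], "may affect", component)

if __name__ == "__main__":
    flag_inventory_hits({"openssl", "bash"})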

The point here is not that open source is less secure than commercial software, or more secure. It's software, and it will therefore have bugs and vulnerabilities. The controls we have used for the code we write are ineffective at identifying vulnerabilities in the code we don't write: open source. As we continue to adopt open source in increasing volume, we need to maintain visibility into, and control over, it.

After all, you can't defend against a risk you don't know exists.

Mike Pittenger has 30 years of experience in technology and business, more than 25 years of management experience, and 15 years in security. At Black Duck, he is responsible for strategic leadership of security solutions, including product direction. Pittenger's extensive ...