Application Security

3/15/2017
10:15 AM
Mike Pittenger
Commentary
Security in the Age of Open Source

Dramatic changes in the use of open source software over the past decade demand major changes in security testing regimens today. Here's what you need to know and do about it.

There have been many changes in recent years in how organizations build, deploy, and manage software, all focused on shortening development life cycles. Agile development aims to get functional software to users more quickly. DevOps and containers are being adopted to deploy applications faster and to simplify the management of production software.

The biggest change, however, is the adoption of open source. Ten years ago, most organizations avoided using open source. They feared onerous licenses, and many didn't trust software that wasn't built in-house. Today, it is rare to see software that doesn't include open source, and we embrace it with good reason. It provides critical functionality we no longer need to build from scratch, lowering development costs while accelerating time to market. We frequently see in-house applications that are 75% or more open source. Even commercial applications are increasingly based on open source. Our 2016 study, The State of Open Source Security in Commercial Applications, found that over 35% of the average commercial code base was open source, made up of more than 100 distinct open source components. Over a third of the code bases we examined were 40% or more open source.

These dramatic changes in the use of open source require modifications to organizations' application security strategies. People understand that sending code under development to a separate security team for testing breaks the agile model, and that reuse of base-level containers risks propagating vulnerabilities in the Linux stack. What is less well-understood is how open source requires changes to our security testing regimens.

[Mike will be speaking about open source myths and perceptions during Interop ITX, May 15-19, at the MGM Grand in Las Vegas.]

Organizations can conduct a variety of security activities throughout the software development life cycle (SDLC), including defining security requirements, threat modeling, and automated testing. These are all great for the code you write. However, traditional security testing tools like static and dynamic analysis have proved ineffective at identifying security issues in open source "in the wild." Heartbleed was present in OpenSSL for two years before it was found. Shellshock was in Bash for over 25 years! Think of how many times applications using these components were subjected to static analysis, dynamic analysis, and pen tests during those years, without any of the tools (or the people using them) noticing the bugs.

Don't get me wrong: static and dynamic analysis are great tools, if you understand what they are good at and what they miss. They undoubtedly help us all build more secure code by identifying coding errors that result in vulnerabilities. But they aren't capable of finding all classes of vulnerabilities, nor can they find every instance within the classes they do cover (Heartbleed, a buffer over-read, fell squarely in scope yet went undetected). They just aren't good at finding vulnerabilities in open source – even those disclosed years ago. This could be because the vulnerabilities in open source are too complex for the tools, or because control and data flow are difficult to map in projects built by hundreds of developers over time. The end result is the same, however.

If traditional tools don't work, and open source is part of your code base, you need to adopt other controls. At a high level, these controls are very straightforward. You need visibility and information. The former is a list of the open source you're using in an application. The latter is ongoing information about the security status of each component. 
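The "visibility" half of that pairing starts with an inventory, a bill of materials for your application. As a minimal sketch (the package names and versions below are illustrative, and real applications span many ecosystems beyond Python's pip), building that list from a pinned manifest might look like:

```python
# Minimal sketch: build an inventory of open source components from a
# pip-style requirements file. This covers only the "visibility" step;
# the package names and versions here are illustrative.

def parse_requirements(lines):
    """Return {component: version} from pinned 'name==version' lines."""
    inventory = {}
    for line in lines:
        line = line.split("#")[0].strip()   # drop comments and whitespace
        if "==" in line:
            name, version = line.split("==", 1)
            inventory[name.strip().lower()] = version.strip()
    return inventory

manifest = [
    "requests==2.19.1   # HTTP client",
    "flask==0.12.2",
    "# build tooling only",
    "jinja2==2.10",
]
print(parse_requirements(manifest))
# → {'requests': '2.19.1', 'flask': '0.12.2', 'jinja2': '2.10'}
```

The version numbers matter as much as the names: the "information" half of the pairing, matching against disclosed vulnerabilities, is keyed on both.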

These are simple tasks on the surface, but difficult to control. Developers are accustomed to pulling in open source from internal repos, GitHub, SourceForge, and project home pages. Many times, they are less than diligent about documenting all of the open source in use, including transitive dependencies (the other open source components that a component requires to operate). Open source is also likely entering the code base through reused internal components. If developers are including outsourced code or commercial components, open source is likely arriving from these sources as well.
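Transitive dependencies are why the declared list understates what you actually ship. Conceptually, the full inventory is the closure of the dependency graph, which a simple traversal can compute. The graph below is a hypothetical example for illustration:

```python
# Sketch: expand direct dependencies into the full set actually shipped,
# following transitive dependencies (deps of deps). The graph here is a
# hypothetical example, not authoritative metadata for these projects.

def transitive_closure(direct, depends_on):
    """Return every component reachable from the direct dependencies."""
    seen = set()
    stack = list(direct)
    while stack:
        pkg = stack.pop()
        if pkg in seen:
            continue
        seen.add(pkg)
        stack.extend(depends_on.get(pkg, []))   # follow deps of deps
    return seen

graph = {
    "flask": ["jinja2", "werkzeug"],
    "jinja2": ["markupsafe"],
}
print(sorted(transitive_closure(["flask"], graph)))
# → ['flask', 'jinja2', 'markupsafe', 'werkzeug']
```

One declared component became four shipped components; at real-world scale, the gap between the documented list and the true closure is far larger.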

Once you have a complete list of components (including version levels), you need a reference source for security information (you should also check licensing information to make sure you’re not risking your own IP by using components under restrictive licenses improperly). The National Vulnerability Database (NVD) is a good starting point, allowing you to look up components by version number and view associated vulnerabilities. If you do this diligently, you can leverage all of the benefits of open source and mitigate the risk associated with using components with known vulnerabilities (OWASP Top Ten Item A9). 
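The matching step itself is mechanical once you have both pieces. This offline sketch uses a hand-built feed for illustration; in practice you would pull the vulnerability records from NVD rather than hard-coding them. The Heartbleed and Shellshock entries reflect their real CVE IDs, while the exact affected-version sets are simplified:

```python
# Offline sketch of matching an inventory against known vulnerabilities.
# In practice the feed comes from NVD; this hand-built feed is simplified.

def find_known_vulns(inventory, vuln_feed):
    """inventory: {component: version};
    vuln_feed: (component, affected_versions, cve_id) records."""
    findings = []
    for component, affected, cve in vuln_feed:
        version = inventory.get(component)
        if version in affected:
            findings.append((component, version, cve))
    return findings

inventory = {"openssl": "1.0.1f", "bash": "4.4"}
vuln_feed = [
    ("openssl", {"1.0.1", "1.0.1a", "1.0.1f"}, "CVE-2014-0160"),  # Heartbleed
    ("bash", {"4.2", "4.3"}, "CVE-2014-6271"),                    # Shellshock
]
print(find_known_vulns(inventory, vuln_feed))
# → [('openssl', '1.0.1f', 'CVE-2014-0160')]
```

Note that the patched Bash build produces no finding: the whole exercise hinges on accurate version data, which is why the inventory must record versions, not just names.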

That's a great first step, but what happens the day after you ship?

Security is not static, so we need to track the ongoing security of the components we use. Since 2014, more than 7,000 vulnerabilities in open source components have been disclosed through NVD, and not all of them are well publicized elsewhere. We all know about Heartbleed, for example, but what about the 89 vulnerabilities reported in NVD for OpenSSL since Heartbleed?
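Tracking ongoing security means re-running the match as new disclosures arrive and surfacing only what's new. A sketch of that monitoring step, again with a simplified hand-built feed (CVE-2014-0224 is the real post-Heartbleed OpenSSL CCS injection flaw; the affected-version sets are abbreviated):

```python
# Sketch of the day-after-you-ship step: diff the latest vulnerability
# feed against what we have already triaged, and alert only on new CVEs
# that affect components we actually ship. The feed here is simplified.

def new_alerts(inventory, triaged_cves, latest_feed):
    """Return (component, cve) pairs for newly disclosed issues."""
    alerts = []
    for component, affected, cve in latest_feed:
        if cve in triaged_cves:
            continue                         # already handled last cycle
        if inventory.get(component) in affected:
            alerts.append((component, cve))
    return alerts

shipped = {"openssl": "1.0.1f"}
triaged = {"CVE-2014-0160"}                  # Heartbleed: already remediated
feed = [
    ("openssl", {"1.0.1f"}, "CVE-2014-0160"),
    ("openssl", {"1.0.1f", "1.0.1g"}, "CVE-2014-0224"),  # CCS injection
]
print(new_alerts(shipped, triaged, feed))
# → [('openssl', 'CVE-2014-0224')]
```

Run on a schedule against each shipped application's inventory, this turns a one-time audit into the continuous monitoring the article is arguing for.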

The point here is not that open source is more or less secure than commercial software. It's software, and therefore will have bugs and vulnerabilities. The controls we have used for the code we write are ineffective at identifying vulnerabilities in the code we don't write – open source. As we continue to adopt open source in increasing volume, we need to maintain visibility into and control over it.

After all, you can't defend against a risk you don't know exists.

Mike Pittenger has 30 years of experience in technology and business, more than 25 years of management experience, and 15 years in security. At Black Duck, he is responsible for strategic leadership of security solutions, including product direction.