Security in the Age of Open Source

Dramatic changes in the use of open source software over the past decade demand major changes in security testing regimens today. Here's what you need to know and do about it.

Mike Pittenger, Vice President, Security Strategy at Black Duck Software

March 15, 2017

5 Min Read

There have been a lot of changes in recent years in how organizations build, deploy, and manage software, all focused on shortening development lifecycles. Agile development is focused on getting functional software to users more quickly. DevOps and containers are being adopted as ways to deploy applications more quickly and to simplify the management of production software.

The biggest change, however, is the adoption of open source. Ten years ago, most organizations avoided using open source. They were fearful of egregious licenses, and many didn't trust software that wasn't built in-house. Today, it is rare to see software that doesn't include open source. We embrace open source with good reason. It provides critical functionality we no longer need to build from scratch, lowering development costs while accelerating time to market. We frequently see in-house applications that are 75% or more open source. Even commercial applications are increasingly based on open source. Our 2016 study, The State of Open Source Security in Commercial Applications, found that over 35% of the average commercial code base was open source, made up of over 100 distinct open source components. Over a third of the code bases we examined were 40% or more open source.

These dramatic changes in the use of open source require modifications to organizations' application security strategies. People understand that sending code under development to a separate security team for testing breaks the agile model, and that reuse of base-level containers risks propagating vulnerabilities in the Linux stack. What is less well-understood is how open source requires changes to our security testing regimens.


Organizations can conduct a variety of security activities throughout the software development life cycle (SDLC), including defining security requirements, threat modeling, and automated testing. These are all great for the code you write. However, traditional security testing tools like static and dynamic analysis have proved ineffective at identifying security issues in open source "in the wild." Heartbleed was present in OpenSSL for two years before it was found. Shellshock was in Bash for over 25 years! Think of how many times applications using these components were subjected to static analysis, dynamic analysis, and pen tests during those years, without any of the tools (or the people using them) noticing the bugs.

Don't get me wrong: static and dynamic analysis are great tools, if you understand what they are good at and what they miss. They undoubtedly help us all build more secure code by identifying coding errors that result in vulnerabilities. But they aren't capable of finding all classes of vulnerabilities, nor can they find every instance of the classes they do cover (Heartbleed, a buffer overflow, is a case in point). They just aren't good at finding vulnerabilities in open source, even those disclosed years ago. This could be because the vulnerabilities in open source are too complex for the tools, or because control and data flow are difficult to map in projects built by hundreds of developers over time. The end result is the same, however.

If traditional tools don't work, and open source is part of your code base, you need to adopt other controls. At a high level, these controls are very straightforward. You need visibility and information. The former is a list of the open source you're using in an application. The latter is ongoing information about the security status of each component. 

These are simple tasks on the surface, but difficult in practice. Developers are accustomed to pulling in open source from internal repos, GitHub, SourceForge, and project home pages. Many times, they are less than diligent about documenting all of the open source in use, including transitive dependencies (other open source components that those components require to operate). Open source is also likely entering the code base through reused internal components. If developers are including outsourced code or commercial components, open source is likely coming from those sources as well.
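To make the visibility step concrete, here is a minimal sketch in Python that enumerates whatever is installed in one Python environment, direct and transitive dependencies alike, as a starting point for a component inventory. It is an illustration only; a real inventory has to cover every ecosystem in use, plus vendored code and container base images.

```python
# Minimal sketch: list installed Python packages (direct and transitive)
# with their versions, as a starting point for an open source component
# inventory. A real inventory would also cover other package ecosystems,
# code copied into the tree by hand, and container base images.
from importlib.metadata import distributions


def inventory() -> dict[str, str]:
    """Map each installed distribution name to its version string."""
    return {dist.metadata["Name"]: dist.version for dist in distributions()}


if __name__ == "__main__":
    for name, version in sorted(inventory().items(), key=lambda kv: kv[0].lower()):
        print(f"{name}=={version}")
```

Note that this only captures what the package manager knows about; components pulled in by hand or buried in reused internal code won't show up, which is exactly why a deliberate inventory process matters.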

Once you have a complete list of components (including version levels), you need a reference source for security information (you should also check licensing information to make sure you’re not risking your own IP by using components under restrictive licenses improperly). The National Vulnerability Database (NVD) is a good starting point, allowing you to look up components by version number and view associated vulnerabilities. If you do this diligently, you can leverage all of the benefits of open source and mitigate the risk associated with using components with known vulnerabilities (OWASP Top Ten Item A9). 
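As an illustration of that lookup, the sketch below queries NVD's public CVE API for one component and version. It assumes the current v2.0 REST endpoint and uses a CPE 2.3 name for OpenSSL 1.0.1f purely as an example; in practice you would construct the CPE name for each component and version in your inventory and check the API documentation for the exact parameters.

```python
# Hedged sketch: query NVD's CVE API (v2.0 endpoint assumed) for known
# vulnerabilities affecting one component/version, identified by a CPE
# name. The OpenSSL 1.0.1f CPE below is only an illustration.
import json
import urllib.parse
import urllib.request

NVD_CVE_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"


def cves_for_cpe(cpe_name: str) -> list[str]:
    """Return the CVE identifiers NVD associates with the given CPE name."""
    query = urllib.parse.urlencode({"cpeName": cpe_name})
    with urllib.request.urlopen(f"{NVD_CVE_API}?{query}") as resp:
        data = json.load(resp)
    return [item["cve"]["id"] for item in data.get("vulnerabilities", [])]


if __name__ == "__main__":
    # CPE 2.3 name for OpenSSL 1.0.1f, the Heartbleed-era release.
    print(cves_for_cpe("cpe:2.3:a:openssl:openssl:1.0.1f:*:*:*:*:*:*:*"))
```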

That's a great first step, but what happens the day after you ship?

Security is not static. We need to track the ongoing security of the components we use. Since 2014, NVD has disclosed over 7,000 vulnerabilities in open source components. Not all of these are well publicized outside of NVD. We all know about Heartbleed, for example, but what about the 89 vulnerabilities reported in NVD for OpenSSL since Heartbleed? 
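One lightweight way to keep that tracking going is to diff each component's current CVE list against a stored baseline on a schedule and alert on anything new. The outline below is assumption-laden: fetch_cve_ids is a hypothetical stand-in for a real feed query (such as the NVD lookup sketched above), and cve_baseline.json is an assumed local snapshot file.

```python
# Hedged sketch of ongoing monitoring: compare each component's current
# CVE list against a stored baseline and report anything new since the
# last run. fetch_cve_ids is a hypothetical placeholder for a real feed
# query; cve_baseline.json is an assumed local snapshot file.
import json
from pathlib import Path

BASELINE = Path("cve_baseline.json")


def fetch_cve_ids(component: str, version: str) -> set[str]:
    """Placeholder: query a vulnerability feed for this component/version."""
    raise NotImplementedError


def report_new_cves(components: dict[str, str]) -> dict[str, set[str]]:
    """Return CVEs seen now that were not in the stored baseline."""
    baseline = json.loads(BASELINE.read_text()) if BASELINE.exists() else {}
    new_findings: dict[str, set[str]] = {}
    for name, version in components.items():
        current = fetch_cve_ids(name, version)
        unseen = current - set(baseline.get(name, []))
        if unseen:
            new_findings[name] = unseen
        baseline[name] = sorted(current)
    BASELINE.write_text(json.dumps(baseline, indent=2))
    return new_findings
```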


The point here is not that open source is more or less secure than commercial software. It's software, and therefore it will have bugs and vulnerabilities. The controls we have used for the code we write are ineffective at identifying vulnerabilities in the code we don't write: open source. As we continue to adopt open source in increasing volume, we need to maintain visibility into it and control over it.

After all, you can't defend against a risk you don't know exists.


About the Author(s)

Mike Pittenger

Vice President, Security Strategy at Black Duck Software

Mike Pittenger has 30 years of experience in technology and business, more than 25 years of management experience, and 15 years in security. At Black Duck, he is responsible for strategic leadership of security solutions, including product direction. Pittenger's extensive security industry background and experience have helped deliver solutions that help companies mitigate the security risks associated with the use of open source software, while integrating with the portfolio of security solutions that most large companies employ. Pittenger previously served as Vice President and General Manager of the product division of @stake. After @stake's acquisition by Symantec, Pittenger led the spin-out of his team to form Veracode. He later served as Vice President of the product and training division of Cigital. For the past several years, he has consulted independently, helping security companies identify, define, and prioritize the benefit their technologies bring to customers, structure solutions appropriately, and bring those offerings to market.

