3/19/2015
10:30 AM
Bill Ledingham
Commentary

Risky Business: Why Monitoring Vulnerability Data Is Never Enough

Keeping tabs on open source code used in your organization's applications and infrastructure is daunting, especially if you are relying solely on manual methods.



Open source software has fueled the rise of the cloud, big data, and mobile applications. The likes of Google, Facebook, and Amazon run predominantly on open source, and this evolution is also playing out across the rest of the enterprise. Gartner estimates that by 2016, open source will be included in mission-critical applications within 99 percent of Global 2000 companies.

How can organizations manage the security implications of this rising influx of open source code? It’s a complex issue. There are many online sources of information about security vulnerabilities and, clearly, these resources are extremely important because the threat landscape is constantly in flux. It might be tempting for organizations to attempt to manage code security simply by monitoring these sources. However, the reality is more complicated. This approach can leave organizations, their critical data, and their products vulnerable to security threats.

Understanding your code base: a problem of scope
For most companies, open source is pervasive. Open source code exists in customer-facing applications, operating systems, infrastructure, and more. For example, more than half of active websites use one or more components of the “LAMP” stack (i.e., the Linux operating system, Apache web server, MySQL database, and PHP scripting language), all of which are open source. Open source software can enter an organization through many different paths.

Within internal development teams, developers may download and embed open source directly into the applications they develop. Third-party software (both as standalone products and as libraries or components embedded in other software) also contains open source. Indeed, software supply chains are growing increasingly complex, as a final product may consist of software from many different vendors, all of which may contain open source. What’s more, open source can find its way into an organization via IT operations, where it may be used extensively as parts of the platforms and tools making up a company’s production environment.

There are many reasons for the popularity of open source, and development organizations’ motivations will vary. But the major motivator is clear: reusing existing open source components instead of building new ones from scratch saves developers time and money. So while there are many benefits to the use of open source, there are some potential risks as well – namely, security vulnerabilities that may be present in various open source components. To understand the level of exposure, an organization must be able to map known vulnerabilities to the open source software that it is using, as well as respond to vulnerabilities that exist in that software today but will only be discovered at some point in the future.
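The mapping described above can be sketched in a few lines. This is a simplified illustration, not a real scanner: the inventory, the vulnerability records, and the flat sets of affected versions are all hypothetical stand-ins for what a commercial tool or a vulnerability feed would provide (real feeds express affected versions as ranges, not lists).

```python
# Illustrative sketch: matching a hypothetical component inventory
# against a list of known vulnerabilities by name and affected version.

# Hypothetical inventory of open source components in use
inventory = [
    {"name": "openssl", "version": "1.0.1f", "app": "payments-api"},
    {"name": "bash",    "version": "4.3",    "app": "build-server"},
    {"name": "glibc",   "version": "2.20",   "app": "web-frontend"},
]

# Simplified vulnerability records (real feeds use version ranges)
vulnerabilities = [
    {"cve": "CVE-2014-0160", "name": "openssl", "affected": {"1.0.1e", "1.0.1f"}},
    {"cve": "CVE-2014-6271", "name": "bash",    "affected": {"4.2", "4.3"}},
]

def find_exposures(inventory, vulnerabilities):
    """Return (app, cve) pairs where an installed version is affected."""
    exposures = []
    for component in inventory:
        for vuln in vulnerabilities:
            if (component["name"] == vuln["name"]
                    and component["version"] in vuln["affected"]):
                exposures.append((component["app"], vuln["cve"]))
    return exposures

for app, cve in find_exposures(inventory, vulnerabilities):
    print(f"{app} is exposed to {cve}")
```

The hard part in practice is not the matching logic but keeping both sides of it accurate: an up-to-date inventory on one side, and a current vulnerability feed on the other.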

This raises the question: how does an organization keep track? For the average organization, keeping tabs on which open source components are in use, and where, can be a daunting challenge, especially if it relies solely on manual methods.

Further complicating the issue is the fact that most of today’s software is in a constant state of change, with additional code that must be vetted for security vulnerabilities being introduced on a regular basis. In addition, whereas new software releases may be “clean,” vulnerabilities may exist in older versions of software that are still in use within the organization. Keeping track of the various versions of software in use is a challenge in itself, let alone the process of mapping vulnerabilities to all those different software components and versions. The bottom line is that a company’s code base is truly a moving target, and continually evolving security vulnerabilities lurk in places that often can’t be easily tracked by its development team.

Avoiding the next Heartbleed or Shellshock
The Heartbleed vulnerability in the widely used OpenSSL library offered a stark reminder of the challenges of insufficient visibility into software code bases. While Heartbleed (CVE-2014-0160) received all the notoriety, 24 additional vulnerabilities have been reported against OpenSSL since Heartbleed first came to light. This is understandable – Heartbleed raised awareness and focused more eyes on the problem. The challenge for organizations is keeping up with all the new versions released to address these vulnerabilities. This is not a patch-once-and-forget situation. Rather, organizations must deal with an ongoing stream of patches and new releases, multiplied by all the different places where the software component is used.

Other recent vulnerabilities such as Shellshock, a flaw in the Bash software component, and GHOST, a flaw in the GNU C Library, continue to reinforce the magnitude of the problem. When vulnerabilities like these occur, organizations face simple but critical questions: “Am I exposed by these vulnerabilities, and if so, what applications or systems are exposed?”

Knowledge leads to effective remediation and reduced risk
These events have led to an increased sense of urgency to monitor and manage open source security. Best practices start with having a governance process for the use of open source within an organization. A first priority is often implementing automated solutions that provide visibility into what open source components are in use – both in production software and in software still in development. These tools and processes provide security professionals with the visibility to understand and mitigate vulnerabilities. For example, by scanning code as part of a nightly build process, development teams can understand what open source software components and versions they are using and know whether vulnerabilities are contained within that software. Mitigation can often be as simple as upgrading to a newer version of a component before the application is released.
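A nightly-build check of the kind described above can be as simple as parsing a dependency manifest and failing the build on known-bad pins. The sketch below is illustrative only: it assumes a pip-style `name==version` manifest, and the `KNOWN_BAD` table is a hypothetical stand-in for a real vulnerability database lookup.

```python
# Hypothetical nightly-build check: parse a pip-style manifest and
# flag any pinned versions with known vulnerabilities.
# The manifest contents and the KNOWN_BAD table are illustrative.

KNOWN_BAD = {
    ("requests", "2.5.3"): "CVE-2015-2296",  # example entry only
}

def scan_manifest(lines):
    """Yield (package, version, cve) for pinned deps with known issues."""
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, _, version = line.partition("==")
        cve = KNOWN_BAD.get((name.lower(), version))
        if cve:
            yield (name, version, cve)

manifest = ["requests==2.5.3", "flask==2.0.1", "# build tooling"]
for pkg, ver, cve in scan_manifest(manifest):
    print(f"FAIL: {pkg} {ver} has {cve}")
```

Running a check like this on every build, rather than as a periodic audit, is what turns the inventory from a snapshot into the continuously current picture the article argues for.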

Since there will never be enough time to address all vulnerabilities, the key is to prioritize, focusing limited resources on the issues presenting the greatest risk. Given that new vulnerabilities will be identified in the future, it’s equally important to continually monitor these new issues against an up-to-date inventory of the open source in use within an organization’s application portfolio.
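The triage step above amounts to ranking findings by severity and drawing a cut-off line. The sketch below shows one simple form of this using CVSS-style scores; the finding records, the placeholder CVE identifiers, and the 7.0 threshold are all illustrative assumptions, and a real prioritization would also weigh exploitability and how exposed the affected system is.

```python
# Illustrative sketch: prioritizing known issues by severity so that
# limited remediation time goes to the highest-risk items first.
# CVE ids and CVSS scores here are placeholders, not real data.

findings = [
    {"cve": "CVE-YYYY-0001", "component": "libfoo", "cvss": 5.3},
    {"cve": "CVE-YYYY-0002", "component": "libbar", "cvss": 9.8},
    {"cve": "CVE-YYYY-0003", "component": "libbaz", "cvss": 7.5},
]

def prioritize(findings, min_score=7.0):
    """Sort by CVSS descending and keep those at or above the threshold."""
    ranked = sorted(findings, key=lambda f: f["cvss"], reverse=True)
    return [f for f in ranked if f["cvss"] >= min_score]

for f in prioritize(findings):
    print(f"{f['cve']} ({f['component']}): CVSS {f['cvss']}")
```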

Armed with this information, security teams, developers, and build teams can make well-informed decisions about mitigation priorities and plans. Security vulnerabilities are a prevalent issue in software today. It’s more important than ever to implement automated controls that help organizations quickly mitigate risk as new vulnerabilities are identified – and, ideally, before they are exploited.

Gartner predicts that between now and 2020, “security and quality defects publicly attributed to open source software projects will increase significantly, driven by a growing presence within high-profile, mission-critical, and mainstream IT workloads.” A proactive approach to this dilemma saves companies time and expense, as well as significantly reducing operational and security risks.

Bill brings over 30 years of technology and security experience to his role as Chief Technology Officer (CTO) and Executive Vice President of Engineering at Black Duck Software. Previously, Bill was CTO of Verdasys, a leader in information and cyber security. Prior to ...
Copyright © 2020 UBM Electronics, A UBM company. All rights reserved.