Dark Reading is part of the Informa Tech Division of Informa PLC


10:30 AM
Bill Ledingham

Risky Business: Why Monitoring Vulnerability Data Is Never Enough

Keeping tabs on open source code used in your organization's applications and infrastructure is daunting, especially if you are relying solely on manual methods.

Open source software has fueled the rise of the cloud, big data, and mobile applications. The likes of Google, Facebook, and Amazon run predominantly on open source, and this evolution is also playing out across the rest of the enterprise. Gartner estimates that by 2016, open source will be included in mission-critical applications within 99 percent of Global 2000 companies.

How can organizations manage the security implications of this rising influx of open source code? It’s a complex issue. There are many online sources of information about security vulnerabilities and, clearly, these resources are extremely important because the threat landscape is constantly in flux. It might be tempting for organizations to attempt to manage code security simply by monitoring these sources. However, the reality is more complicated. This approach can leave organizations, their critical data, and their products vulnerable to security threats.

Understanding your code base: a problem of scope
For most companies, open source is pervasive. Open source code exists in customer-facing applications, operating systems, infrastructure, and more. For example, more than half of active websites use one or more components of the “LAMP” stack (i.e., the Linux operating system, Apache web server, MySQL database, and PHP scripting language), all of which are open source. Open source software can enter an organization through many different paths.

Within internal development teams, developers may download and embed open source directly into the applications they develop. Third-party software (both as standalone products and as libraries or components embedded in other software) also contains open source. Indeed, software supply chains are growing increasingly complex, as a final product may consist of software from many different vendors, all of which may contain open source. What’s more, open source can find its way into an organization via IT operations, where it may be used extensively as parts of the platforms and tools making up a company’s production environment.

There are many reasons for the popularity of open source, and development organizations’ motivations will vary. But the major motivator is clear: using existing open source components instead of building new ones from scratch saves developers time and money. So while there are many benefits to the use of open source, there are some potential risks as well – namely, security vulnerabilities that may be present in various open source components. To understand the level of exposure, an organization must be able to map known vulnerabilities to the open source software that it is using, as well as deal with today’s unknown vulnerabilities when they are discovered at some point in the future.

This raises the question: how does an organization keep track? For the average organization, keeping tabs on what and where open source is in use can be a daunting challenge, especially if relying solely on manual methods.
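As a minimal sketch of what “keeping track” means in practice, the following Python example maps an inventory of (component, version) pairs against a set of known advisories. The inventory and advisory data structures here are hypothetical stand-ins; a real workflow would populate them from a code scan and a feed such as the NVD.

```python
# Minimal sketch: map an inventory of open source components to known
# vulnerabilities. The data here is illustrative; in practice it would
# come from an automated code scan and a public vulnerability feed.

# (name, version) pairs discovered across the organization's applications
inventory = [
    ("openssl", "1.0.1f"),
    ("bash", "4.3"),
    ("openssl", "1.0.1h"),
]

# Known advisories: component -> {vulnerable version: advisory ID}
advisories = {
    "openssl": {"1.0.1f": "CVE-2014-0160"},  # Heartbleed
    "bash": {"4.3": "CVE-2014-6271"},        # Shellshock
}

def find_exposures(inventory, advisories):
    """Return a (component, version, advisory) tuple for each vulnerable entry."""
    exposures = []
    for name, version in inventory:
        advisory = advisories.get(name, {}).get(version)
        if advisory:
            exposures.append((name, version, advisory))
    return exposures

for name, version, advisory in find_exposures(inventory, advisories):
    print(f"{name} {version} is affected by {advisory}")
```

Note that the same component appears twice in the inventory at different versions, and only the vulnerable version is flagged; tracking versions, not just component names, is what makes the mapping useful.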

Further complicating the issue is the fact that most of today’s software is in a constant state of change, with additional code that must be vetted for security vulnerabilities being introduced on a regular basis. In addition, whereas new software releases may be “clean,” vulnerabilities may exist in older versions of software that are still in use within the organization. Keeping track of the various versions of software in use is a challenge in itself, let alone the process of mapping vulnerabilities to all those different software components and versions. The bottom line is that a company’s code base is truly a moving target, and continually evolving security vulnerabilities lurk in places that often can’t be easily tracked by its development team.

Avoiding the next Heartbleed or Shellshock
The Heartbleed vulnerability in the widely used OpenSSL library offered a stark reminder of the challenges of insufficient visibility into software code bases. While Heartbleed (CVE-2014-0160) received all the notoriety, 24 additional vulnerabilities have been reported against OpenSSL since Heartbleed first came to light. This is understandable – Heartbleed raised awareness and focused more eyes on the problem. The challenge for organizations is keeping up with all the new versions released to address these vulnerabilities. It is not a matter of patching once and forgetting about it. Rather, organizations must deal with an ongoing stream of patches and new releases, multiplied by all the different places where the software component is used.

Other recent vulnerabilities such as Shellshock, a flaw in the Bash software component, and GHOST, a flaw in the GNU C Library, continue to reinforce the magnitude of the problem. When vulnerabilities like these occur, organizations face simple but critical questions: “Am I exposed by these vulnerabilities, and if so, what applications or systems are exposed?”

Knowledge leads to effective remediation and reduced risk
These events have led to an increased sense of urgency to monitor and manage open source security. Best practices start with having a governance process for the use of open source within an organization. A first priority is often implementing automated solutions that provide visibility into what open source components are in use – both in production software and in software still in development. These tools and processes provide security professionals with the visibility to understand and mitigate vulnerabilities. For example, by scanning code as part of a nightly build process, development teams can understand what open source software components and versions they are using and know whether vulnerabilities are contained within that software. Mitigation can often be as simple as upgrading to a newer version of a component before the application is released.
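The nightly-build check described above can be sketched as a simple gate: the build is flagged as failed whenever a scanned component matches a known-vulnerable version. The component list and advisory set below are hypothetical stand-ins for whatever scanning tool and feed an organization actually uses.

```python
# Sketch of a build-time gate: flag the nightly build as failed when a
# scanned component matches a known vulnerability. The scan results and
# advisory set are illustrative stand-ins for real tooling output.

def check_build(components, vulnerable):
    """components: {name: version}; vulnerable: set of (name, version) pairs.
    Returns the list of findings; an empty list means the build is clean."""
    return [(n, v) for n, v in components.items() if (n, v) in vulnerable]

components = {"openssl": "1.0.1f", "zlib": "1.2.8"}  # from the nightly scan
vulnerable = {("openssl", "1.0.1f")}                 # from an advisory feed
findings = check_build(components, vulnerable)
for name, version in findings:
    print(f"BUILD BLOCKED: {name} {version} has a known vulnerability")
build_status = "failed" if findings else "passed"
```

Failing the build at this point means the mitigation – often just upgrading the flagged component – happens before the application ships rather than after.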

Since there will never be enough time to address all vulnerabilities, the key is to prioritize, focusing limited resources on the issues presenting the greatest risk. Given that new vulnerabilities will be identified in the future, it’s equally important to continually monitor these new issues against an up-to-date inventory of the open source in use within an organization’s application portfolio.
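One common way to prioritize is by severity score. The sketch below ranks findings by CVSS base score and flags those above a threshold as urgent; the scores, component names, and threshold are illustrative assumptions, not values from the article.

```python
# Sketch: rank findings by CVSS base score so limited remediation time
# goes to the highest-risk issues first. Scores here are illustrative.
findings = [
    {"component": "examplelib", "cve": "CVE-EXAMPLE-1", "cvss": 5.3},
    {"component": "openssl", "cve": "CVE-2014-0160", "cvss": 7.5},
    {"component": "bash", "cve": "CVE-2014-6271", "cvss": 9.8},
]

def prioritize(findings, threshold=7.0):
    """Sort by severity (highest first) and flag issues at or above threshold."""
    ranked = sorted(findings, key=lambda f: f["cvss"], reverse=True)
    return [dict(f, urgent=f["cvss"] >= threshold) for f in ranked]

for f in prioritize(findings):
    flag = "URGENT" if f["urgent"] else "backlog"
    print(f'{flag}: {f["component"]} {f["cve"]} (CVSS {f["cvss"]})')
```

In practice the ranking would also weigh exposure (is the vulnerable code reachable? internet-facing?), but a severity sort is a reasonable first cut when resources are limited.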

Armed with this information, security teams, developers, and build teams can make the best-informed decisions about mitigation priorities and plans. Security vulnerabilities are a prevalent issue in software today. It’s more important than ever to implement automated controls that help organizations to quickly mitigate risk as new vulnerabilities are identified, and ideally, before they are exploited.

Gartner predicts that between now and 2020, “security and quality defects publicly attributed to open source software projects will increase significantly, driven by a growing presence within high-profile, mission-critical, and mainstream IT workloads.” A proactive approach to this dilemma saves companies time and expense, as well as significantly reducing operational and security risks.

Bill brings over 30 years of technology and security experience to his role as Chief Technology Officer (CTO) and Executive Vice President of Engineering at Black Duck Software. Previously, Bill was CTO of Verdasys, a leader in information and cyber security. Prior to ... View Full Bio

User Rank: Apprentice
3/20/2015 | 8:14:04 PM
Static analysis vs. vulnerable components database
Veracode offers a static analysis solution similar to Coverity, but it works on the compiled code and not the source code.  Anyway I'm not sure doing static analysis on your 3rd party libraries/open source components is practical.  You may end up with a much bigger project than you want.  And these tools can have false negatives like already mentioned.  Probably more effective is to use an automated solution that can figure out what components are present in your software and then checks a database to flag known vulnerable components.  OWASP has a tool called Dependency Check that is designed to do this, and there's no cost.  I haven't used it myself yet though.
User Rank: Strategist
3/20/2015 | 4:25:34 AM
Re: What about Coverity?
The way I learned so much in penetration testing and web development, is from open source. I use Kali and Ubuntu along with the multitude of open source within it. I *love* it!

I don't see reliable evidence that closed source is more vulnerable than open source, or vice versa. In my mind it would have to be examined on a case-by-case basis, given the technologies involved. There are many factors involved in analyzing the security level of a package. You see critical vulnerabilities reported in both types of software, frequently. That's not the complete picture in terms of security. Just because a hole isn't found doesn't mean there isn't a hole there.

Some open source projects have strong communities which react quickly when vulnerabilities are reported. Others are slower. Same goes for commercial software. Personally, I'd be concerned about how fast patches get applied to reported vulnerabilities. If reported vulnerabilities are not patched in a timely manner, I would be suspicious of the product, open or closed source.

It seems prudent that a large corporation relying heavily on a particular project would give it a look. I'm sure that in many cases they are already doing that. Also, I think it would be most important to perform periodic and structured security audits on open source security mechanisms such as OpenSSL.
[email protected],
User Rank: Author
3/19/2015 | 9:31:16 PM
Re: What about Coverity?

I agree, running static code analysis tools such as Coverity is certainly a best practice for finding defects and vulnerabilities in both open source and proprietary code. I believe that Coverity even offers a free service for open source projects to scan their code. Unfortunately, not all open source projects do this. In addition, not all vulnerabilities are caught by these tools — many are discovered by security researchers analyzing the code directly. Thus, it still makes sense to have an accounting of what open source software you are using and what known vulnerabilities have been reported against those projects and versions.

Charlie Babcock,
User Rank: Ninja
3/19/2015 | 7:17:03 PM
What about Coverity?
I don't know what it costs, but Coverity will perform a static check on all code sent to it for a fee. It looks for security vulnerabilities and poor coding practices and gives you a report. This may not be all that's needed to maintain your open source code, but it seems to me some kind of periodic check by an outside third party would be a good way to go.