Vulnerabilities / Threats

3/19/2015
10:30 AM
Bill Ledingham
Commentary

Risky Business: Why Monitoring Vulnerability Data Is Never Enough

Keeping tabs on open source code used in your organization's applications and infrastructure is daunting, especially if you are relying solely on manual methods.

Open source software has fueled the rise of the cloud, big data, and mobile applications. The likes of Google, Facebook, and Amazon run predominantly on open source, and this evolution is also playing out across the rest of the enterprise. Gartner estimates that by 2016, open source will be included in mission-critical applications within 99 percent of Global 2000 companies.

How can organizations manage the security implications of this rising influx of open source code? It’s a complex issue. There are many online sources of information about security vulnerabilities and, clearly, these resources are extremely important because the threat landscape is constantly in flux. It might be tempting for organizations to attempt to manage code security simply by monitoring these sources. However, the reality is more complicated. This approach can leave organizations, their critical data, and their products vulnerable to security threats.

Understanding your code base: a problem of scope
For most companies, open source is pervasive. Open source code exists in customer-facing applications, operating systems, infrastructure, and more. For example, more than half of active websites use one or more components of the “LAMP” stack (i.e., the Linux operating system, Apache web server, MySQL database, and PHP scripting language), all of which are open source. Open source software can enter an organization through many different paths.

Within internal development teams, developers may download and embed open source directly into the applications they develop. Third-party software (both as standalone products and as libraries or components embedded in other software) also contains open source. Indeed, software supply chains are growing increasingly complex, as a final product may consist of software from many different vendors, all of which may contain open source. What’s more, open source can find its way into an organization via IT operations, where it may be used extensively as part of the platforms and tools making up a company’s production environment.

There are many reasons for the popularity of open source, and development organizations’ motivations will vary. But the major motivator is clear: using existing open source components instead of building new ones from scratch saves developers time and money. So while there are many benefits to the use of open source, there are some potential risks as well – namely, security vulnerabilities that may be present in various open source components. To understand its level of exposure, an organization must be able to map known vulnerabilities to the open source software it is using, as well as respond to vulnerabilities that exist in that software today but will only be discovered at some point in the future.
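
To make that mapping concrete, here is a minimal sketch in Python. The inventory and the vulnerability table are invented for illustration; a real program would pull them from a build manifest and a vulnerability data source such as the NVD.

    # Minimal sketch: check an inventory of open source components against a
    # table of known vulnerable versions. All data here is illustrative.
    inventory = {
        "openssl": "1.0.1f",
        "bash": "4.3",
        "glibc": "2.20",
    }

    known_vulnerable = {
        ("openssl", "1.0.1f"): ["CVE-2014-0160"],   # Heartbleed
        ("bash", "4.3"): ["CVE-2014-6271"],         # Shellshock
    }

    for component, version in sorted(inventory.items()):
        cves = known_vulnerable.get((component, version), [])
        if cves:
            print("{} {}: known vulnerabilities {}".format(component, version, ", ".join(cves)))
        else:
            print("{} {}: no known vulnerabilities in this table".format(component, version))

The lookup itself is trivial; the hard parts in practice are producing a complete inventory and keeping the vulnerability data current.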

This raises the question: how does an organization keep track? For the average organization, keeping tabs on what open source is in use, and where, can be a daunting challenge, especially when relying solely on manual methods.

Further complicating the issue is the fact that most of today’s software is in a constant state of change, with additional code that must be vetted for security vulnerabilities being introduced on a regular basis. In addition, whereas new software releases may be “clean,” vulnerabilities may exist in older versions of software that are still in use within the organization. Keeping track of the various versions of software in use is a challenge in itself, let alone the process of mapping vulnerabilities to all those different software components and versions. The bottom line is that a company’s code base is truly a moving target, and continually evolving security vulnerabilities lurk in places that often can’t be easily tracked by its development team.

Avoiding the next Heartbleed or Shellshock
The Heartbleed vulnerability in the widely used OpenSSL library offered a stark reminder of the risks of insufficient visibility into software code bases. While Heartbleed (CVE-2014-0160) received all the notoriety, 24 additional vulnerabilities have been reported against OpenSSL since Heartbleed first came to light. This is understandable – Heartbleed raised awareness and focused more eyes on the problem. The challenge for organizations is keeping up with all the new versions released to address these vulnerabilities. It’s not merely a matter of patching once and forgetting about it. Rather, organizations must deal with an ongoing stream of patches and new releases, multiplied by all the different places where the software component is used.
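
As a sketch of what that tracking looks like for a single component, the snippet below flags every deployment still running an OpenSSL build affected by Heartbleed (1.0.1 through 1.0.1f, fixed in 1.0.1g). The deployment inventory is invented for illustration.

    # Sketch: find every deployment still running a Heartbleed-affected
    # OpenSSL build. The deployment inventory below is invented.
    heartbleed_affected = {"1.0.1", "1.0.1a", "1.0.1b", "1.0.1c",
                           "1.0.1d", "1.0.1e", "1.0.1f"}

    openssl_deployments = {
        "payments-api": "1.0.1f",
        "vpn-gateway": "1.0.1g",
        "legacy-portal": "1.0.1e",
    }

    for system, version in sorted(openssl_deployments.items()):
        if version in heartbleed_affected:
            print("{}: OpenSSL {} is affected - upgrade to 1.0.1g or later".format(system, version))

The same exercise has to be repeated for each subsequent OpenSSL advisory, which is why a continuously updated inventory matters more than any one-time check.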

Other recent vulnerabilities such as Shellshock, a flaw in the Bash software component, and GHOST, a flaw in the GNU C Library, continue to reinforce the magnitude of the problem. When vulnerabilities like these occur, organizations face simple but critical questions: “Am I exposed by these vulnerabilities, and if so, what applications or systems are exposed?”

Knowledge leads to effective remediation and reduced risk
These events have led to an increased sense of urgency to monitor and manage open source security. Best practices start with having a governance process for the use of open source within an organization. A first priority is often implementing automated solutions that provide visibility into what open source components are in use – both in production software and in software still in development. These tools and processes provide security professionals with the visibility to understand and mitigate vulnerabilities. For example, by scanning code as part of a nightly build process, development teams can understand what open source software components and versions they are using and know whether vulnerabilities are contained within that software. Mitigation can often be as simple as upgrading to a newer version of a component before the application is released.
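
As a rough illustration of how such a check can gate a nightly build, the sketch below assumes the build already emits a component manifest and that a vulnerability feed has been exported to a local JSON file. The file names and formats are assumptions; commercial and open source scanners automate these steps.

    # Sketch of a nightly-build gate: fail the build if any component in the
    # manifest matches an entry in a locally cached vulnerability feed.
    # File names and formats are assumptions for illustration.
    import json
    import sys

    with open("build/manifest.json") as f:
        components = json.load(f)    # e.g. [{"name": "openssl", "version": "1.0.1f"}, ...]

    with open("feeds/known_vulns.json") as f:
        feed = json.load(f)          # e.g. {"openssl": {"1.0.1f": ["CVE-2014-0160"]}}

    findings = []
    for comp in components:
        cves = feed.get(comp["name"], {}).get(comp["version"], [])
        if cves:
            findings.append((comp["name"], comp["version"], cves))

    for name, version, cves in findings:
        print("VULNERABLE: {} {} ({})".format(name, version, ", ".join(cves)))

    sys.exit(1 if findings else 0)   # a non-zero exit fails the nightly build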

Since there will never be enough time to address all vulnerabilities, the key is to prioritize, focusing limited resources on the issues presenting the greatest risk. Given that new vulnerabilities will be identified in the future, it’s equally important to continually monitor these new issues against an up-to-date inventory of the open source in use within an organization’s application portfolio.
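
One simple way to order that work is to rank findings by severity and exposure. The sketch below sorts by whether the affected system is internet-facing and then by CVSS score; the scores and the exposure flag are illustrative, and a real program would take both from the vulnerability feed and the asset inventory.

    # Sketch: rank open source findings so limited remediation time goes to
    # the highest-risk issues first. Scores and flags are illustrative.
    findings = [
        {"component": "openssl", "cve": "CVE-2014-0160", "cvss": 5.0, "internet_facing": True},
        {"component": "glibc",   "cve": "CVE-2015-0235", "cvss": 6.8, "internet_facing": False},
        {"component": "bash",    "cve": "CVE-2014-6271", "cvss": 10.0, "internet_facing": True},
    ]

    # Internet-facing systems first, then higher CVSS scores first.
    findings.sort(key=lambda item: (item["internet_facing"], item["cvss"]), reverse=True)

    for item in findings:
        exposure = "internet-facing" if item["internet_facing"] else "internal"
        print("{} in {} (CVSS {}, {})".format(item["cve"], item["component"], item["cvss"], exposure))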

Armed with this information, security teams, developers, and build teams can make the best-informed decisions about mitigation priorities and plans. Security vulnerabilities are a prevalent issue in software today. It’s more important than ever to implement automated controls that help organizations to quickly mitigate risk as new vulnerabilities are identified, and ideally, before they are exploited.

Gartner predicts that between now and 2020, “security and quality defects publicly attributed to open source software projects will increase significantly, driven by a growing presence within high-profile, mission-critical, and mainstream IT workloads.” A proactive approach to this dilemma saves companies time and expense, as well as significantly reducing operational and security risks.

Bill brings over 30 years of technology and security experience to his role as Chief Technology Officer (CTO) and Executive Vice President of Engineering at Black Duck Software. Previously, Bill was CTO of Verdasys, a leader in information and cyber security. Prior to ...
 

Comments
dferguson_usa, User Rank: Apprentice
3/20/2015 | 8:14:04 PM
Static analysis vs. vulnerable components database
Veracode offers a static analysis solution similar to Coverity, but it works on the compiled code rather than the source code. Anyway, I'm not sure doing static analysis on your 3rd-party libraries/open source components is practical. You may end up with a much bigger project than you want. And these tools can have false negatives, as already mentioned. Probably more effective is to use an automated solution that can figure out what components are present in your software and then check a database to flag known vulnerable components. OWASP has a tool called Dependency Check that is designed to do this, and there's no cost. I haven't used it myself yet, though.
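
Roughly, pointing the command-line version of Dependency Check at a directory of libraries from a build script looks something like the sketch below; the project name, scan path, and output directory are just placeholders.

    # Rough sketch: run OWASP Dependency Check from a Python build script.
    # Project name, scan path, and output directory are placeholders.
    import subprocess

    subprocess.run([
        "dependency-check.sh",
        "--project", "my-app",
        "--scan", "libs/",       # directory of third-party jars/components
        "--format", "HTML",
        "--out", "reports/",
    ], check=True)
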
xmarksthespot, User Rank: Strategist
3/20/2015 | 4:25:34 AM
Re: What about Coverity?
The way I learned so much about penetration testing and web development is from open source. I use Kali and Ubuntu along with the multitude of open source within them. I *love* it!

I don't see reliable evidence that closed source is more vulnerable than open source, or vice versa. In my mind it would have to be examined on a case-by-case basis, depending on the technologies involved. There are many factors involved in analyzing the security level of a package. You see critical vulnerabilities reported in both types of software, frequently. And that's not the complete picture in terms of security. Just because a hole hasn't been found doesn't mean there isn't a hole there.

Some open source projects have strong communities which react quickly when vulnerabilities are reported. Others are slower. Same goes for commercial software. Personally, I'd be concerned about how fast patches get applied to reported vulnerabilities. If reported vulnerabilities are not patched in a timely manner, I would be suspicious of the product, open or closed source.

It seems prudent that a large corporation relying heavily on a particular project would give it a look. I'm sure that in many cases they are already doing that. Also, I think it would be most important to perform periodic and structured security audits on open source security mechanisms such as OpenSSL.
bill@blackduck, User Rank: Author
3/19/2015 | 9:31:16 PM
Re: What about Coverity?

I agree, running static code analysis tools such as Coverity is certainly a best practice for finding defects and vulnerabilities in both open source and proprietary code. I believe that Coverity even offers a free service for open source projects to scan their code. Unfortunately, not all open source projects do this. In addition, not all vulnerabilities are caught by these tools; many are discovered by security researchers analyzing the code directly. Thus, it still makes sense to have an accounting of what open source software you are using and what known vulnerabilities have been reported against those projects and versions.

Charlie Babcock, User Rank: Ninja
3/19/2015 | 7:17:03 PM
What about Coverity?
I don't know what it costs, but Coverity will perform a static check on all code sent to it for a fee. It looks for security vulnerabilities and poor coding practices and gives you a report. This may not be all that's needed to maintain your open source code, but it seems to me some kind of periodic check by an outside third party would be a good way to go.