Yet in many cases, the risk lies not in code that internal developers have written, but in components provided by outside developers, whether open-source libraries or third-party toolkits. Take the open-source renderer WebKit: while many companies know that browsers such as Apple's Safari and Google's Chrome rely on WebKit, so do a number of other applications, such as Entourage 2008, Yahoo! Messenger, and Macromedia's Contribute 3.
Companies that rely on third parties for code place their security in the hands of other developers, says Barmak Meftah, vice president of enterprise security products for Hewlett-Packard.
"Companies don't typically think of the risk that third-party software opens them to and the inherent use of open-source software as part of their stack," Meftah says. "There is an assumption that vendors and open-source developers have gone through the security checkpoints during the application development process, and that assumption is false."
To secure their software, companies must first figure out which code components have become part of their code base. The first step is to take a census of all the code used for development, says HD Moore, chief security officer with vulnerability management firm Rapid7 and chief architect of Metasploit.
"Even development teams that are pretty well-versed in what they are doing and know what their product looks like may not be aware of what back-end libraries have been used, and that is what you cover during a code audit," Moore says.
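The census Moore describes can start with something as simple as walking the project tree and collecting the third-party libraries declared in dependency manifests. The sketch below is a minimal, hypothetical illustration: it handles only two common manifest formats (pip's `requirements.txt` and npm's `package.json`), chosen as examples rather than an exhaustive list, and a real audit would also have to find undeclared, vendored, and statically linked code.

```python
import json
import os
import re

def census(project_root):
    """Return a sorted list of (manifest, library) pairs found under project_root."""
    found = set()
    for dirpath, _dirnames, filenames in os.walk(project_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if name == "requirements.txt":
                with open(path) as fh:
                    for line in fh:
                        # Drop comments and blank lines.
                        line = line.split("#", 1)[0].strip()
                        if not line:
                            continue
                        # Strip version pins such as "requests==2.31.0".
                        lib = re.split(r"[<>=!~\[;]", line, maxsplit=1)[0].strip()
                        if lib:
                            found.add((name, lib))
            elif name == "package.json":
                with open(path) as fh:
                    data = json.load(fh)
                for section in ("dependencies", "devDependencies"):
                    for lib in data.get(section, {}):
                        found.add((name, lib))
    return sorted(found)
```

Even this crude inventory surfaces the back-end libraries a team may not realize it ships, which is the raw material for the contract and assessment steps that follow.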
As a prelude to any code audit, companies must verify that they are allowed to assess a software library, whether open-source or closed-source. If the libraries are supplied by a third-party developer, then companies must focus on contract language, starting with securing permission to analyze the software, HP's Meftah says.
"We are seeing an increasing trend in having contracting clauses that allow the end users to do some analysis on the software," he says.
Once companies have established their rights to analyze the software, the developers and IT security teams need to do an application assessment and find the vulnerabilities in the software, whether through static analysis, by monitoring the developer's support forum, or through an intelligence service that tracks changes to software.
The company can then make an informed decision to patch the software or, if a patch is not practical, use a runtime analysis product to harden the application against exploitation of any critical vulnerabilities.
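The patch-or-harden decision above can be expressed as a simple triage rule. This is a toy sketch, not any vendor's workflow: the `Finding` fields and severity labels are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Finding:
    component: str
    severity: str                    # e.g. "low", "medium", "high", "critical"
    patched_version: Optional[str]   # None if no fix has shipped yet

def triage(finding: Finding) -> str:
    if finding.patched_version is not None:
        # A fix exists: schedule the upgrade and the regression testing.
        return f"patch {finding.component} to {finding.patched_version}"
    if finding.severity in ("high", "critical"):
        # No fix yet: wrap the component with runtime protection so any
        # anomalous activity that goes through it can be tracked.
        return f"harden {finding.component} at runtime and monitor"
    return f"track {finding.component} until a fix ships"
```

The point of the sketch is the fallback order: patching is the default, and runtime hardening is what makes living with an unpatchable critical component tolerable.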
"Any anomalous activity that goes through that component can be tracked," Meftah says. "You could live with the vulnerable piece of software as long as you have hardened it."
One problem for many companies is that the source of a software library might not be the sole supplier, Veracode's Wysopal says.
"Every third party you are dealing with may have third parties that they're dealing with as well," he says. "So someone who is doing their due diligence should go to their third-party supplier and ask what are they doing for application security."
This so-called nested third-party supply chain problem can hide the actual source of software and make fixing vulnerabilities more difficult, he says.
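The nested-supplier problem is, in effect, a transitive-closure problem: a component's real exposure is everything reachable through its suppliers' suppliers. A minimal sketch, using a made-up dependency map:

```python
def transitive_deps(component, declared):
    """Collect every library reachable from `component` via the `declared` map."""
    seen = set()
    stack = [component]
    while stack:
        current = stack.pop()
        for dep in declared.get(current, ()):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# Hypothetical example: the app declares only libA, but inherits libB and libC.
supply_chain = {"app": ["libA"], "libA": ["libB"], "libB": ["libC"]}
```

A flaw in `libC` affects the application even though no one at the company ever chose `libC`, which is why due diligence has to extend past the first tier of suppliers.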
Even after chasing down the code components on which a certain library might depend, companies still have the problematic task of verifying that patches do not break the applications. The process is an arduous one, but necessary for companies that use third-party components, Rapid7's Moore says.
"You can definitely go on a rabbit hole chasing these down," he says.