Are your software developers sabotaging your company's application code? How do you know?

Tim Wilson, Editor in Chief, Dark Reading

November 5, 2007


ARLINGTON, Va. -- Computer Security Institute 2007 -- What if a software developer wants to put a security flaw in your enterprise's applications?

That was the foreboding question posed by Dawn Cappelli and Andrew Moore, two experts from CERT and the Software Engineering Institute at Carnegie Mellon University, in a session held at the CSI show here today. The answers had some security pros in attendance worried.

"This is a little scary," said one attendee, who asked to remain anonymous. "This could happen to us."

Most enterprises trust their developers, and they generally assume that security flaws found in enterprise software are the result of accidents, oversights, or sloppy coding. But enterprises should also be watching for that small, dangerous fraction of developers who create backdoors or other exploits that might let them steal or damage data later on, according to the CERT experts. (See Security's New School.)

CERT, which has been doing research on insider threats for several years in conjunction with the U.S. Secret Service, found one company that lost $691 million over a five-year period through modified source code introduced by an employee in applications development.

"You wonder, how could a company lose that much money over such a long period of time and not catch it," said Cappelli. "But he was in a position where he could not only reroute the funds, but he could also change the reports."

Companies should ask themselves whether their policies, tools, and processes might make them vulnerable to such insider attacks, Cappelli said. Given the right circumstances, a seemingly harmless developer could leave openings for theft, plant a logic bomb to destroy information, or wipe out backup data that is crucial to the company, she observed.

"In the research we've done with the Secret Service, we've seen more logic bombs and malicious code than we expected," Cappelli said. "Some of them aren't terribly destructive at first and some of them don't go off right away. We saw one developer put malicious code in software that didn't begin to operate until a year after he put it in."

Cappelli and Moore took attendees through a range of scenarios in which a developer might have the leeway to intentionally create security vulnerabilities. Shared passwords, insufficient separation of duties, and a lack of adequate access controls are among the environmental weaknesses that might tempt a greedy or disgruntled worker, they said.

For example, some IT organizations might allow the same individual who manages administrative passwords to also gain access to code, the experts said. A developer who has the ability to make changes in software and the means to cover his tracks might be extremely dangerous to the corporation, they said.

Some of these opportunities for malicious coding can be eliminated through more secure development tools and practices, Cappelli said. But it's also important to implement managerial processes that help limit developers' reach and detect warning signs that a programmer might be about to go bad, she said.
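One modest but concrete control of that kind is automated screening of code changes. The sketch below, again hypothetical, shows a simple check that flags files containing idioms commonly seen in time-delayed triggers so that a second reviewer must sign off; the pattern list and usage are invented for illustration and would need tuning for any real codebase.

    import re
    import sys

    # Hypothetical screening check: flag files containing idioms often
    # associated with time-delayed triggers. The patterns are
    # illustrative, not a complete or reliable detector.
    SUSPICIOUS_PATTERNS = [
        re.compile(r"date\.today\(\)\s*[><=]"),          # date-comparison triggers
        re.compile(r"time\.time\(\)\s*[><=]"),           # epoch-time triggers
        re.compile(r"\bDROP\s+TABLE\b", re.IGNORECASE),  # embedded destructive SQL
    ]

    def flag_file(path):
        """Return line numbers in `path` that match a suspicious pattern."""
        hits = []
        with open(path, encoding="utf-8", errors="replace") as f:
            for lineno, line in enumerate(f, start=1):
                if any(p.search(line) for p in SUSPICIOUS_PATTERNS):
                    hits.append(lineno)
        return hits

    if __name__ == "__main__":
        # Usage: python screen_commit.py file1.py file2.sql ...
        failed = False
        for path in sys.argv[1:]:
            for lineno in flag_file(path):
                print(f"{path}:{lineno}: needs second-reviewer sign-off")
                failed = True
        sys.exit(1 if failed else 0)

A check like this cannot prove malice, but it concentrates independent review where it matters most, which is precisely the separation-of-duties point the speakers were making.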

"In our work with the Secret Service, we've used psychologists to study the behavior of the individuals who committed these types of attacks," she said. "There are a lot of non-technology indicators that you need to watch for as well."

Most developers who commit sabotage do so because they are upset or angry with the company, Cappelli observed. Thieves, by contrast, may do a better job of hiding their behavior -- and their tracks.

"The people around them can often see it coming before something happens," Cappelli says. Companies should work to create a culture in which individuals feel a responsibility to report suspicious activity, especially among the development team, she says.

The insider threat continues to grow, largely unnoticed by the industry, Cappelli said. When CERT and the Secret Service conducted their first study three years ago, 39 percent of enterprises reported having experienced an insider incident. Last year, that figure had risen to 55 percent.

"In most cases, this sort of threat is handled internally," said Cappelli. "Of the enterprises we found that had experienced an insider attack, 74 percent of them never reported the incident to law enforcement."


About the Author

Tim Wilson, Editor in Chief, Dark Reading

Tim Wilson is Editor in Chief and co-founder of DarkReading.com, UBM Tech's online community for information security professionals. He is responsible for managing the site, assigning and editing content, and writing breaking news stories. Wilson has been recognized as one of the top cybersecurity journalists in the US in peer voting conducted by the SANS Institute, and in 2011 he was named one of the 50 Most Powerful Voices in Security by SYS-CON Media.
