Taking Steps To Stop Software Sabotage

Enterprise applications present tempting targets for developers, IT admins, and other insiders with the technical know-how to tamper with code


When most security pros think about application security, the first goal that comes to mind is finding and remediating flaws in development and production. But what if the bugs aren't accidents at all? What if they're planted on purpose by someone in the organization who knows where to hide them?

Software sabotage is a real threat -- one with a growing docket of criminal cases over the past decade. The scripts to these particular screenplays are all different, but the stories are very similar: Developers and IT admins use their positions of power to plant logic bombs to avenge personal grudges, to tamper with application data to benefit a future employer, or to insert business process errors that net the malicious insider some sort of financial gain.
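For reviewers who have never seen one, a logic bomb is usually nothing more exotic than an out-of-place conditional. Below is a minimal, deliberately defanged sketch of the date-trigger pattern; the job, trigger date, and payload are all hypothetical, and the "payload" only prints a message.

import datetime

def nightly_maintenance() -> None:
    """A routine job that also hides a dormant, date-triggered branch."""
    run_cleanup()  # the legitimate work the job is supposed to do

    # The logic-bomb pattern: a conditional unrelated to the job's purpose,
    # armed to fire long after the author has left the company.
    if datetime.date.today() >= datetime.date(2026, 1, 31):
        detonate()

def run_cleanup() -> None:
    print("rotating logs, pruning temp files ...")

def detonate() -> None:
    # Defanged: a real bomb would wipe data or corrupt systems here.
    print("payload would fire here")

if __name__ == "__main__":
    nightly_maintenance()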

It's a "worst-nightmare" scenario, says Chris Weber, co-founder and managing principal of application security consultancy Casaba Security, but it frequently flies under the radar as a risk worth addressing because organizations are still down in the weeds with other, more workaday application security concerns.

[Is uptime really a good reason to avoid scanning production apps? See Too Scared To Scan.]

"We still see the very basic SQL injection and very basic shopping cart negative number manipulation-type examples on high-risk applications at Fortune 500 companies, ones that are spending a lot of money on application security," says Nish Bhalla, CEO of Security Compass, an information security consulting firm specializing in secure software development. "So if you add another layer of complexity to say, 'Hey, by the way, not only is that a concern, but you should be looking for Easter eggs and other things you have to hunt for,' that's usually not going to go over well."

A pair of recent reports from Veracode and Cenzic confirms the backlog Bhalla describes. Cenzic reported that 99 percent of the applications it tested in 2012 contained at least one serious bug, and Veracode found that one-third of Web applications are vulnerable to SQL injection. Nevertheless, in mission-critical applications, a case of software sabotage could have a very material impact on the business. For example, an oil company that depends on mapping software to survey oil sites could lose millions if its telemetry were toyed with to show sites a few miles from their actual locations, says Dan Stickel, CEO of Metaforic, a firm that creates anti-tamper software. Similarly, a hedge fund dependent on its trading algorithm could make disastrous decisions if it were surreptitiously changed, he says.

Depending on an application's value to the business, it makes sense for application security stakeholders to run an assessment of how attackers could touch the source code and what a compromise would mean for the business.

"It's a good idea to sit down and draw out all the means and the ways source code could be tainted or sabotaged," says Weber, explaining that an attacker could come from a number of different populations. "You've got the malicious developer, and then there's someone internal who is outside of that circle, and then you've got someone who could even be sitting on the Internet and who knows a vulnerability in the CDNs [content delivery networks] who could insert lines of JavaScript."

Though the risk of sabotage is a tough nut to crack, organizations must start with sane internal processes.

"The most practical means [of defense] is to establish a system of checks and balances," he says, "where one person shouldn't be the only person to check in code, or developers shouldn't have control of the audit logs, for example."

During development, organizations can also establish control through organizational requirements, such as pair programming and robust peer code review.

"One of the things that a lot of companies do is they pair up programmers so that one person is always looking at the code that another person is writing," Stickel says. "That's actually useful on a lot of different levels. It's useful to try to prevent such sabotage, of course, but it's also useful to catch normal QA problems and make people more creative."

Similarly, "peer code review is a place where at least senior guys might catch some of these bugs," Bhalla says.

In fact, Stickel notes, in the widely reported 2009 Fannie Mae case, a fired contractor inserted a logic bomb that would have taken down all 4,000 of the company's servers -- but a fellow employee spotted the malicious code before it did any damage.

In addition to code reviews, some risk-averse enterprises could take a page from the mature ISVs that Weber works with, which often have a robust source code management process in place, including the use of code signing.

"Every single code check-in is audited and logged, and developers are actually required to use some sort of signing key so that their name is stamped on that check-in, and there's no way they can actually muddle or modify the logs," he says.

However, as important as checks and balances are in development and QA, it's actually production software that's most at risk of sabotage.

"The development of the code, the signing of it, the installation and all that, that's a blip in time," Stickel says. "The real system that you're trying to protect, typically, is the one that is running in real life, day after day."

It's true, confirms Dawn Cappelli, principal engineer at CERT, who says the CERT Insider Threat Center finds that insider attacks involving source code usually happen during the maintenance phase.

"Once the code is out there in production, it is stable, nobody is really watching, you can go in there and enhance it, change it, and fix it," she says. "In the original development cycle, you have a lot of code reviews and teams of people who are watching together. So you really need to watch what's happening to your systems, even when they're in production."

Automated scanning of production software can often act as the first line of defense against intentionally inserted malcode, just as it does against inadvertently introduced vulnerabilities, says Stickel.

"A lot of people look at those automatic scanning routines thinking you're just looking for potential exploits that a programmer has put in there by mistake, and that's true -- that's the primary use case," he says. "But that will also find exploits that have been deliberately put inside the program."

Nevertheless, these scanners won't find everything, especially if the inserted problem isn't a vulnerability, per se. That's where companies like Metaforic and Arxan are starting to step in with anti-tamper technology for integrity checking.
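The core of such integrity checking is simple in principle: record a digest of every deployed file at release time, then periodically re-hash and compare. A minimal sketch follows; commercial anti-tamper products layer self-checking code, obfuscation, and automated response on top of this basic idea.

import hashlib
import json
import pathlib

def snapshot(root: str) -> dict:
    """Hash every file under root, keyed by relative path."""
    base = pathlib.Path(root)
    return {
        str(p.relative_to(base)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(base.rglob("*")) if p.is_file()
    }

def save_baseline(root: str, manifest: str) -> None:
    pathlib.Path(manifest).write_text(json.dumps(snapshot(root), indent=2))

def check(root: str, manifest: str) -> list:
    """Return files added, removed, or modified since the baseline."""
    baseline = json.loads(pathlib.Path(manifest).read_text())
    current = snapshot(root)
    changed = [p for p in current if baseline.get(p) != current[p]]
    removed = [p for p in baseline if p not in current]
    return changed + removed

For the comparison to mean anything, the baseline manifest has to live somewhere the production system -- and its admins -- cannot write to.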

Technology isn't the only control that matters, however. Security fundamentals need to be locked down as well. Password sharing, for example, can expose production source code to an enormous amount of risk.

"What we find often times are passwords sitting around in plain text files or some sort of network shares, so theoretically any person on the network in the corporation, even just a vendor who's just working part time or a contractor, could find that, access, and even modify the code in production outside of the whole source code management system," says Weber, who recommends organizations require two-factor authentication for all systems access and code deployments.

That should also be tied to rigorous segregation-of-duties policies that rarely, if ever, give developers direct access to production environments, says Bhalla. He adds that this isn't just an IT problem: Organizations need to think carefully about HR screening practices for IT insiders to ensure those individuals truly are the trusted employees IT needs them to be.

"We shouldn't be relying just on technology," he says.


About the Author

Ericka Chickowski, Contributing Writer

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.
