Dark Reading is part of the Informa Tech Division of Informa PLC



Taking Steps To Stop Software Sabotage

Enterprise applications pose tempting targets to developers, IT admins, and other insiders with the technical know-how to tamper with code

When most security pros think about application security, the first goal that usually comes to mind is finding and remediating flaws in development and production. But what if the bugs put in place are no accident? What if they're planted there on purpose by someone in the organization who knows where to hide them?

Software sabotage is a real threat -- one with demonstrable criminal case studies accelerating over the past decade. The scripts to these particular screenplays are all different, but the stories are very similar: Developers and IT admins use their positions of power to plant logic bombs to avenge personal grudges, or to tamper with application data to benefit a future employer, or to insert business process errors that could help the malicious insider gain some sort of financial benefit.

It's a "worst-nightmare" scenario, says Chris Weber, co-founder and managing principal of application security consultancy Casaba Security, but it frequently flies under the radar as a risk worth addressing because organizations are still stuck in the weeds with other more workaday application security concerns.

[Is uptime really a good reason to avoid scanning production apps? See Too Scared To Scan.]

"We still see the very basic SQL injection and very basic shopping cart negative number manipulation-type examples on high-risk applications at Fortune 500 companies, ones that are spending a lot of money on application security," says Nish Bhalla, CEO of Security Compass, an information security consulting firm specializing in secure software development. "So if you add another layer of complexity to say, 'Hey, by the way, not only is that a concern, but you should be looking for Easter eggs and other things you have to hunt for,' that's usually not going to go over well."

A pair of recent reports from Veracode and Cenzic confirms the backlog Bhalla describes. Cenzic reported that 99 percent of the applications it tested in 2012 contained at least one serious bug, and Veracode showed that one-third of Web applications are vulnerable to SQL injection. Nevertheless, in mission-critical applications, a case of software sabotage could have a very material impact on the business. For example, an oil company that depends on mapping software to survey oil sites could lose millions if its telemetry were toyed with to show sites a few miles from their actual locations, says Dan Stickel, CEO of Metaforic, a firm that creates anti-tamper software. Similarly, a hedge fund dependent on its trading algorithm could make disastrous decisions if that algorithm were surreptitiously changed, he says.
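The two bug classes Bhalla mentions are simple to make concrete. Below is a minimal sketch (the table, column names, and prices are hypothetical, chosen only for illustration) of the standard defenses: parameterized queries against SQL injection, and a server-side range check against negative-number cart manipulation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER, price REAL)")
conn.execute("INSERT INTO products VALUES (1, 9.99)")

def get_price(product_id):
    # Parameterized query: user input is bound as data, never spliced
    # into the SQL string, which blocks SQL injection.
    row = conn.execute(
        "SELECT price FROM products WHERE id = ?", (product_id,)
    ).fetchone()
    return row[0] if row else None

def cart_total(price, quantity):
    # Server-side range check: rejecting non-positive quantities defeats
    # the "negative number" manipulation, where an attacker submits
    # quantity = -5 hoping to generate a credit instead of a charge.
    if quantity < 1:
        raise ValueError("quantity must be a positive integer")
    return price * quantity
```

Neither check is exotic; the point of the reports above is that even these basics are still missing from many high-risk applications.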

Depending on the value of the application to the business, it makes sense for application security stakeholders to assess how attackers could potentially touch the source code and what that would mean for the business.

"It's a good idea to sit down and draw out all the means and the ways source code could be tainted or sabotaged," says Weber, explaining that an attacker could come from a number of different populations. "You've got the malicious developer, and then there's someone internal who is outside of that circle, and then you've got someone who could even be sitting on the Internet and who knows a vulnerability in the CDNs [content delivery networks] who could insert lines of JavaScript."

Though defending against sabotage is a tough nut to crack, organizations should start with sane internal processes.

"The most practical means [of defense] is to establish a system of checks and balances," he says, "where one person shouldn't be the only person to check in code, or developers shouldn't have control of the audit logs, for example."

During development, organizations can also establish control through organizational requirements, such as pair programming and robust peer code review.

"One of the things that a lot of companies do is they pair up programmers so that one person is always looking at the code that another person is writing," Stickel says. "That's actually useful on a lot of different levels. It's useful to try to prevent such sabotage, of course, but it's also useful to catch normal QA problems and make people more creative."

Similarly, "peer code review is a place where at least senior guys might catch some of these bugs," Bhalla says.

In fact, Stickel says, in one famous case a fired consultant inserted a logic bomb that would have wiped out all 4,000 servers running at Fannie Mae in 2009, but a fellow employee spotted the malicious code before it did any damage.

In addition to code reviews, some risk-averse enterprises could take a page from the mature ISVs that Weber works with, which often have a robust source code management process in place, including the use of code signing.

"Every single code check-in is audited and logged, and developers are actually required to use some sort of signing key so that their name is stamped on that check-in, and there's no way they can actually muddle or modify the logs," he says.
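The tamper-evident check-in log Weber describes can be approximated even without a full code-signing infrastructure. Here is a deliberately simplified sketch (the key handling is naive; in practice each developer would hold a protected signing key) of a hash-chained audit log, where rewriting any earlier entry invalidates every entry after it:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"example-signing-secret"  # hypothetical; would be protected in practice

def append_entry(log, developer, commit_hash):
    # Each entry's MAC covers the previous entry's MAC, forming a chain:
    # editing or deleting an old check-in breaks every later MAC.
    prev_mac = log[-1]["mac"] if log else ""
    payload = json.dumps({"dev": developer, "commit": commit_hash,
                          "prev": prev_mac}, sort_keys=True)
    mac = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    log.append({"dev": developer, "commit": commit_hash,
                "prev": prev_mac, "mac": mac})

def verify(log):
    # Recompute the whole chain; any mismatch means the log was muddled.
    prev_mac = ""
    for entry in log:
        payload = json.dumps({"dev": entry["dev"], "commit": entry["commit"],
                              "prev": prev_mac}, sort_keys=True)
        expected = hmac.new(SIGNING_KEY, payload.encode(),
                            hashlib.sha256).hexdigest()
        if entry["prev"] != prev_mac or not hmac.compare_digest(entry["mac"], expected):
            return False
        prev_mac = entry["mac"]
    return True
```

The same property is what commit signing in a real source code management system provides: a developer's name is cryptographically stamped on each check-in, and history cannot be quietly rewritten.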

However, as important as checks and balances are in development and QA, it's actually production software that's more at risk of sabotage incidents.

"The development of the code, the signing of it, the installation and all that, that's a blip in time," Stickel says. "The real system that you're trying to protect, typically, is the one that is running in real life, day after day."

It's true, confirms Dawn Cappelli, principal engineer at CERT, who says that the CERT Insider Threat Center finds that insider attacks involving source code usually happen in the maintenance phase.

"Once the code is out there in production, it is stable, nobody is really watching, you can go in there and enhance it, change it, and fix it," she says. "In the original development cycle, you have a lot of code reviews and teams of people who are watching together. So you really need to watch what's happening to your systems, even when they're in production."

Automated scanning of production software can often act as the first line of defense against intentionally inserted malcode, just as it does against accidental flaws, says Stickel.

"A lot of people look at those automatic scanning routines thinking you're just looking for potential exploits that a programmer has put in there by mistake, and that's true -- that's the primary use case," he says. "But that will also find exploits that have been deliberately put inside the program."
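Scanners in this class work by flagging suspicious constructs regardless of whether they got into the code by accident or by design. The following is a deliberately tiny illustration of that idea (real products use far more sophisticated parsing and dataflow analysis; these three patterns are just examples): dynamic code execution, hardcoded credentials, and date-gated branches, a classic logic-bomb telltale.

```python
import re

# Each entry pairs a regex with a human-readable finding label.
SUSPICIOUS = [
    (r"\beval\s*\(", "dynamic code execution"),
    (r"password\s*=\s*[\"'][^\"']+[\"']", "hardcoded credential"),
    (r"date.*[<>=]+.*20\d\d", "possible date-triggered logic"),
]

def scan(source):
    # Return (line number, label) for every suspicious line found.
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for pattern, label in SUSPICIOUS:
            if re.search(pattern, line, re.IGNORECASE):
                findings.append((lineno, label))
    return findings
```

A finding is only a lead for a human reviewer, of course; whether the construct was a mistake or sabotage takes investigation to determine.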

Nevertheless, these scanners won't find everything, especially if the inserted problem isn't a vulnerability, per se. That's where companies such as Metaforic and Arxan are starting to step in with anti-tamper technology for integrity checking.
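At its simplest, integrity checking means comparing a runtime hash of the deployed artifact against a known-good value recorded at release time. A bare-bones sketch of the concept follows (commercial anti-tamper products embed and obfuscate many such checks inside the protected binary itself, which this illustration does not attempt):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # SHA-256 digest of the artifact's bytes.
    return hashlib.sha256(data).hexdigest()

def check_integrity(deployed: bytes, known_good: str) -> bool:
    # Any single-bit change to the deployed artifact changes the digest,
    # so a patched binary or edited script fails the comparison.
    return fingerprint(deployed) == known_good

# Hypothetical release artifact and its baseline, recorded at release time.
release = b"print('hello')"
baseline = fingerprint(release)
```

This catches tampering that a vulnerability scanner would miss entirely, because a sabotaged business rule or altered constant isn't a "vulnerability" at all, just a difference from what was shipped.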

However, technology isn't the only necessary control. Security fundamentals need to be locked down as well. For example, password sharing can pose a huge amount of risk to production source code.

"What we often find are passwords sitting around in plaintext files or on network shares, so theoretically any person on the corporate network, even a part-time vendor or contractor, could find them, access the code in production, and even modify it outside the whole source code management system," says Weber, who recommends organizations require two-factor authentication for all systems access and code deployments.
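The two-factor authentication Weber recommends typically pairs a password with a time-based one-time code. The TOTP algorithm behind most authenticator apps (RFC 6238, building on RFC 4226) is small enough to sketch with the standard library alone:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp=None, step=30, digits=6):
    # Derive a counter from the current 30-second time window (RFC 6238).
    now = time.time() if timestamp is None else timestamp
    counter = int(now // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # RFC 4226 dynamic truncation: pick 4 bytes at an offset taken from
    # the digest's last nibble, then reduce to a short decimal code.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because the server and the token derive the same code from a shared secret and the clock, a stolen password alone (say, one found in a plaintext file on a share) is no longer enough to push code to production.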

That should also be tied together with rigorous segregation-of-duties policies that rarely, if ever, give developers direct access to production environments, says Bhalla. He adds that this isn't just an IT problem: organizations need to think carefully about HR screening practices for IT insiders to ensure those individuals truly are the trusted employees IT needs them to be.

"We shouldn't be relying just on technology," he says.


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

User Rank: Apprentice
4/24/2013 | 8:25:07 PM
re: Taking Steps To Stop Software Sabotage
You can trend employees. Rigorous background checks won't happen at many key infrastructure corporations. If an employee behaves suspiciously when security vulnerabilities are found in their code, that can be an indicator. If they consistently have security vulnerabilities in their code, that can be an indicator. If they planted a full-blown back door, that can be an indicator.

But indicators are just indicators. About the only situation where they could be acted on is a counterintelligence investigation. For regular law enforcement, this kind of detection is out of the question unless the employee puts in something very obvious or talks about it to a potential witness.

User Rank: Ninja
4/23/2013 | 5:38:06 PM
re: Taking Steps To Stop Software Sabotage

Is this even possible? Even with a system of checks and balances, there are bound to be vulnerabilities that rogue employees will exploit. The bottom line is to trust yourself and your company's hiring and background-check measures. There is no viable solution other than having one administrator with all the keys, which is unrealistic, and that raises the question of what happens if the administrator goes rogue. I agree that rigorous separation of departments with multiple levels of security would help reduce software sabotage, but never eliminate it. Keep in mind there are many measures to catch it once it has happened, but avoiding it altogether is obviously a challenge.

Paul Sprague

InformationWeek Contributor

User Rank: Apprentice
4/16/2013 | 3:01:22 AM
re: Taking Steps To Stop Software Sabotage
"Custom code is used to glue the components together." -- 78M05
User Rank: Apprentice
4/12/2013 | 7:20:45 PM
re: Taking Steps To Stop Software Sabotage
Hello Ericka - thanks for the interesting article. The other angle to consider is the move to component-based development and agile practices. Modern development is dominated by components - applications are now constructed from components, many of them open source. Custom code is used to glue the components together. It is possible that an insider could attempt to subvert the application by replacing a trusted component, so it's important that organizations have the ability to maintain the integrity of the components they use. This should be done as part of a larger component management strategy.

Mark Troester
User Rank: Apprentice
4/11/2013 | 1:45:38 PM
re: Taking Steps To Stop Software Sabotage
A nation-state-backed or very smart inside attacker would likely just inject a security vulnerability that is difficult to find and, even if caught, would be written off as accidental. That doesn't take much logic to figure out, and the tactic works strongly in their favor.

For web applications and the like they would also want a vulnerability which would plausibly be found by an outsider. It is routine to rate any vulnerabilities found using the factor of ease of finding as part of the threat ratio.

Proving a developer did this is next to impossible, however.

I do not think company background checks would discover someone in the dual employ of a foreign nation. And that is the most likely suspect, because this is one of the most effective ways for nations to steal information from other nations.