Risk
4/11/2013 03:22 AM

Taking Steps To Stop Software Sabotage

Enterprise applications pose tempting targets to developers, IT admins, and other insiders with the technical know-how to tamper with code

When most security pros think about application security, the first goal that usually comes to mind is finding and remediating flaws in development and production. But what if the bugs put in place are no accident? What if they're planted there on purpose by someone in the organization who knows where to hide them?

Software sabotage is a real threat -- one with demonstrable criminal case studies accumulating over the past decade. The scripts to these particular screenplays all differ, but the stories are very similar: Developers and IT admins use their positions of power to plant logic bombs that avenge personal grudges, to tamper with application data to benefit a future employer, or to insert business-process errors that net the malicious insider some sort of financial gain.

It's a "worst-nightmare" scenario, says Chris Weber, co-founder and managing principal of application security consultancy Casaba Security, but it frequently flies under the radar as a risk worth addressing because organizations are still stuck in the weeds with other, more workaday application security concerns.

[Is uptime really a good reason to avoid scanning production apps? See Too Scared To Scan.]

"We still see the very basic SQL injection and very basic shopping cart negative number manipulation-type examples on high-risk applications at Fortune 500 companies, ones that are spending a lot of money on application security," says Nish Bhalla, CEO of Security Compass, an information security consulting firm specializing in secure software development. "So if you add another layer of complexity to say, 'Hey, by the way, not only is that a concern, but you should be looking for Easter eggs and other things you have to hunt for,' that's usually not going to go over well."
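The kind of basic SQL injection flaw Bhalla describes can be shown in a few lines. This sketch (table and variable names are hypothetical) contrasts an injectable query with a parameterized one, using an in-memory SQLite database:

```python
# Contrast of an injectable query with a parameterized one.
# Table contents and the payload string are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the WHERE clause.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: the driver binds the payload as data, so it matches nothing.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # [('alice',)] -- the payload matched every row
print(safe)    # [] -- the literal string matched no row
```

The parameterized form is the standard fix; the point of Bhalla's comment is that even this baseline is still routinely missed.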

A pair of recent reports from Veracode and Cenzic confirms the backlog Bhalla describes: Cenzic reported that 99 percent of the applications it tested in 2012 contained at least one serious bug, and Veracode found that one-third of Web applications are vulnerable to SQL injection. Nevertheless, in mission-critical applications, a case of software sabotage could have a very material impact on the business. For example, an oil company that depends on mapping software to survey oil sites could lose millions if its telemetry were toyed with to show sites a few miles from their actual locations, says Dan Stickel, CEO of Metaforic, a firm that creates anti-tamper software. Similarly, a hedge fund dependent on its trading algorithm could make disastrous decisions if the algorithm were surreptitiously changed, he says.

Depending on the value of the application to the business, it makes sense to have application security stakeholders do an assessment to start thinking about how attackers could potentially touch the source code and what it would mean for the business.

"It's a good idea to sit down and draw out all the means and the ways source code could be tainted or sabotaged," says Weber, explaining that an attacker could come from a number of different populations. "You've got the malicious developer, and then there's someone internal who is outside of that circle, and then you've got someone who could even be sitting on the Internet and who knows a vulnerability in the CDNs [content delivery networks] who could insert lines of JavaScript."

Though coming up with solutions for the risk of sabotage is a tough nut to crack, organizations must start first with sane internal processes.

"The most practical means [of defense] is to establish a system of checks and balances," he says, "where one person shouldn't be the only person to check in code, or developers shouldn't have control of the audit logs, for example."

During development, organizations can also establish control through organizational requirements, such as pair-level programming and robust peer code review.

"One of the things that a lot of companies do is they pair up programmers so that one person is always looking at the code that another person is writing," Stickel says. "That's actually useful on a lot of different levels. It's useful to try to prevent such sabotage, of course, but it's also useful to catch normal QA problems and make people more creative."

Similarly, "peer code review is a place where at least senior guys might catch some of these bugs," Bhalla says.

In fact, Stickel notes, that is how the famous 2009 Fannie Mae case ended: a fired consultant inserted a logic bomb that would have leveled all 4,000 of the company's servers, but a fellow employee spotted the malicious code before it did any damage.
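The Fannie Mae bomb was a script set to fire on a future date, and that pattern is exactly what a reviewer can catch: an out-of-place conditional keyed to a hard-coded trigger. A harmless sketch of the shape (function names and the trigger date are hypothetical; the "payload" here just returns a string):

```python
# Harmless illustration of a date-triggered logic bomb: a dormant
# conditional that activates once a hard-coded date is reached.
from datetime import date

DETONATION_DATE = date(2009, 1, 31)  # hypothetical trigger date

def nightly_maintenance(today: date) -> str:
    if today >= DETONATION_DATE:
        # In a real attack this branch would wipe data or disable servers;
        # here it only returns a marker string.
        return "payload triggered"
    return "normal maintenance run"

print(nightly_maintenance(date(2009, 1, 30)))  # normal maintenance run
print(nightly_maintenance(date(2009, 1, 31)))  # payload triggered
```

A second set of eyes asking "why does a maintenance script compare against a fixed future date?" is precisely the check that pair programming and peer review provide.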

In addition to code reviews, some risk-averse enterprises could take a page from the mature ISVs that Weber works with, which often have a robust source code management process in place, including the use of code signing.

"Every single code check-in is audited and logged, and developers are actually required to use some sort of signing key so that their name is stamped on that check-in, and there's no way they can actually muddle or modify the logs," he says.
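The tamper-evident audit trail Weber describes can be approximated with keyed hashes: each check-in record carries an HMAC under the developer's signing key, so anyone who later edits the record invalidates the tag. A minimal sketch (the record fields and key handling are hypothetical; real systems keep keys in an HSM or keystore):

```python
# Sketch of a tamper-evident check-in log using HMAC-SHA256.
import hashlib
import hmac
import json

def sign_checkin(record: dict, dev_key: bytes) -> str:
    """Return a tag binding the developer's key to the check-in record."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(dev_key, payload, hashlib.sha256).hexdigest()

def verify_checkin(record: dict, tag: str, dev_key: bytes) -> bool:
    """Constant-time check that the record still matches its tag."""
    return hmac.compare_digest(sign_checkin(record, dev_key), tag)

key = b"alice-signing-key"  # hypothetical per-developer key
entry = {"author": "alice", "commit": "a1b2c3", "file": "billing.py"}
tag = sign_checkin(entry, key)

print(verify_checkin(entry, tag, key))   # True: untouched entry verifies
entry["author"] = "bob"                  # a doctored log entry...
print(verify_checkin(entry, tag, key))   # False: ...fails verification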

However, as important as checks and balances are in development and QA, it's actually production software that's more at risk of sabotage incidents.

"The development of the code, the signing of it, the installation and all that, that's a blip in time," Stickel says. "The real system that you're trying to protect, typically, is the one that is running in real life, day after day."

It's true, confirms Dawn Cappelli, principal engineer at CERT, who says that the CERT Insider Threat Center finds that insider attacks involving source code usually happen in the maintenance phase.

"Once the code is out there in production, it is stable, nobody is really watching, you can go in there and enhance it, change it, and fix it," she says. "In the original development cycle, you have a lot of code reviews and teams of people who are watching together. So you really need to watch what's happening to your systems, even when they're in production."

Automated scanning of production software can often act as the first line of defense against intentionally inserted malcode, just as it does against accidental vulnerabilities, says Stickel.

"A lot of people look at those automatic scanning routines thinking you're just looking for potential exploits that a programmer has put in there by mistake, and that's true -- that's the primary use case," he says. "But that will also find exploits that have been deliberately put inside the program."

Nevertheless, these scanners won't find everything, especially if the inserted problem isn't a vulnerability, per se. That's where companies like Metaforic and Arxan are starting to step in with anti-tamper technology for integrity checking.
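The core of that kind of integrity checking is comparing a deployed artifact against a known-good fingerprint recorded at release time. A simplified sketch using a SHA-256 baseline (file names are hypothetical; commercial anti-tamper products embed and obfuscate such checks inside the binary itself rather than running them externally):

```python
# Sketch of baseline integrity checking for a production artifact.
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """SHA-256 digest of the file contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# At deployment: record the baseline for the shipped artifact.
artifact = Path("app.py")  # hypothetical production file
artifact.write_text("print('hello')\n")
baseline = fingerprint(artifact)

# Later, in production: any modification changes the fingerprint.
artifact.write_text("print('hello')  # sabotaged\n")
tampered = fingerprint(artifact) != baseline
print("tampered:", tampered)  # tampered: True
```

A scheduled job that re-fingerprints production binaries and alerts on drift catches the quiet "maintenance-phase" edits Cappelli describes, whether accidental or deliberate.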

However, technology isn't the only control that matters; security fundamentals need to be locked down as well. For example, password sharing can expose production source code to a huge amount of risk.

"What we often find are passwords sitting around in plain-text files or on network shares, so theoretically any person on the corporate network -- even a vendor who's just working part time, or a contractor -- could find them, access the code in production, and even modify it outside of the whole source code management system," says Weber, who recommends that organizations require two-factor authentication for all systems access and code deployments.
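A first-pass sweep for the plain-text credential files Weber mentions can be as simple as a pattern scan over a share. A toy sketch (the patterns, file extensions, and directory are hypothetical; a real sweep would cover far more formats and locations):

```python
# Naive scan for plain-text credentials left in text files on a share.
import re
from pathlib import Path

CRED_PATTERN = re.compile(r"(password|passwd|pwd)\s*[:=]\s*\S+", re.IGNORECASE)

def scan_share(root: Path) -> list[Path]:
    """Return files under root that appear to contain plain-text credentials."""
    hits = []
    for path in root.rglob("*.txt"):
        if CRED_PATTERN.search(path.read_text(errors="ignore")):
            hits.append(path)
    return hits

# Tiny demonstration with a throwaway directory.
share = Path("demo_share")
share.mkdir(exist_ok=True)
(share / "deploy_notes.txt").write_text("prod db password=hunter2\n")
(share / "readme.txt").write_text("nothing sensitive here\n")
print([p.name for p in scan_share(share)])  # ['deploy_notes.txt']
```

A sweep like this finds only the sloppiest leaks, which is the point: if the naive scan turns anything up, the two-factor and vaulting controls Weber recommends are overdue.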

That should be tied to rigorous segregation-of-duties policies that rarely, if ever, give developers direct access to production environments, says Bhalla. Nor is this just an IT problem: organizations need to think carefully about HR screening practices for IT insiders to ensure those individuals truly are the trusted employees IT needs them to be.

"We shouldn't be relying just on technology," he says.


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Comments
AgtX, User Rank: Apprentice
4/24/2013 | 8:25:07 PM
re: Taking Steps To Stop Software Sabotage
You can trend employees. Rigorous background checks won't happen at many key-infrastructure corporations. If an employee behaves suspiciously when security vulnerabilities are found in their code, that can be an indicator. If they consistently have security vulnerabilities in their code, that can be an indicator. If they planted a full-blown back door, that can be an indicator.

But indicators are just indicators. The only case where such an indicator could be used is a counterintelligence investigation. For regular law enforcement situations, this kind of detection is out of the question unless the employee puts in something very obvious or talks about it to a potential witness.

PJS880, User Rank: Ninja
4/23/2013 | 5:38:06 PM
re: Taking Steps To Stop Software Sabotage
Is this even possible? Even with a system of checks and balances, there are bound to be vulnerabilities that rogue employees will exploit. The bottom line is to trust yourself and your company's hiring and background-check measures. There is no viable solution, other than having one administrator with all the keys, which is unrealistic -- and that raises the question of what happens if the administrator goes rogue. I agree that rigorous separation of departments with multiple levels of security would help reduce software sabotage, but never eliminate it. Keep in mind there are many measures to catch it once it has happened, but avoiding it altogether is obviously a challenge.

Paul Sprague
InformationWeek Contributor

marktroester, User Rank: Apprentice
4/12/2013 | 7:20:45 PM
re: Taking Steps To Stop Software Sabotage
Hello Ericka -- thanks for the interesting article. The other angle to consider is the move to component-based development and agile practices. Modern-day development is dominated by components: applications are now constructed from components, many of them open source, with custom code used to glue the components together. It is possible that an insider could attempt to sabotage the application by replacing a trusted component, so it's important that organizations be able to maintain the integrity of the components they use. This should be done as part of a larger component management strategy.

Mark Troester
http://www.sonatype.com/people...
Twitter: @mtroester
AgtX, User Rank: Apprentice
4/11/2013 | 1:45:38 PM
re: Taking Steps To Stop Software Sabotage
A nation-backed or very smart inside attacker would likely just inject a security vulnerability that is difficult to find and that, even if caught, would be considered accidental. That doesn't take much logic to figure out, and the tactic works strongly in their favor.

For web applications and the like, they would also want a vulnerability that could plausibly be found by an outsider. It is routine to rate any vulnerability found using ease of discovery as part of the threat calculation.

Proving a developer did this is next to impossible, however.

I do not think company background checks would discover someone in the dual employ of a foreign nation. And that is the most likely suspect, because this is one of the most effective ways for nations to steal information from other nations.