Commentary
10/28/2012 05:17 PM
Tom Parker

Supply Chain Woes: Human Error Or Something Else Entirely?

How easily are plausibly deniable bugs really introduced into the supply chain, and are recent fears concerning foreign technologies more hype than fact?

A thought that has lingered in my mind since the industry's annual jaunt to Las Vegas this year has been the idea of plausible deniability when it comes to infiltrating the supply chain and introducing malicious code into software and, in particular, protocol implementations. FX and the_gerg presented some great research relating to vulnerabilities in Huawei equipment. Their talk concluded with a slide suggesting that you get what you pay for.

But do you? Would Huawei equipment be that much more secure at even twice or three times the market price? I've heard a number of reasonable arguments on both sides of the debate -- on one hand, regarding the plausibility of concerns about Huawei and ZTE, and on the other, theories that Huawei and ZTE are being unfairly treated. Supply-chain concerns aren't just for those buying commercially available products developed overseas, but should also be a point of concern for corporations outsourcing development to overseas entities.

Arguments as to whether Huawei is having issues with its development practices aside, the general idea of introducing plausibly deniable vulnerabilities into the supply chain is an interesting subject, which, once you begin to think about it, isn't quite so simple.

At FusionX, we're asked from time to time to simulate the subversion of a client's internal and external supply chain -- often involving the modification of internally developed programs that are subjected to some form of SDL process. First of all, you need to conceive bugs that could plausibly be introduced through human error and that will pass muster when the code is subjected to security reviews. So, no, we're not just talking about a simple stack-based strcpy bug that's going to get spotted by humans and source-code analysis products. Chances are that if you're going to pull this off, you're going to want as few people as possible knowing about your little skunk works project.
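To make the distinction concrete, here is a minimal, purely illustrative sketch (not drawn from any vendor's code) of the kind of bug that reads like ordinary human error: the wire-format length field lands in a signed int before the bounds check, so a crafted length such as 0xFFFFFFFF becomes -1, slips past the upper-bound test, and would later be converted to an enormous size_t at the copy site. Unlike a bare strcpy, this survives a casual review.

```c
#include <stdint.h>

#define MAX_PAYLOAD 64

/* Hypothetical parser fragment: the bounds check looks correct at a
 * glance, but storing the 32-bit wire length in a *signed* int means a
 * negative (i.e., very large unsigned) length passes the check. The
 * missing `len < 0` test is entirely plausible as an accident. */
int length_check_passes(int32_t len) {
    if (len > MAX_PAYLOAD)
        return 0;   /* rejected: too long */
    return 1;       /* accepted: "safe" to copy len bytes */
}
```

A reviewer who reads this as "lengths over 64 are rejected" has understood exactly what the author wanted them to understand.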

In any large software or hardware development firm, there's a pretty high chance that the code you commit is going to undergo some kind of review. In addition to internal reviews, many large vendors will make at least partial source code available to their biggest partners and customers for numerous reasons (such as product integration). With this in mind, I personally doubt that the bugs discussed in Vegas were put there intentionally. They were likely the result of poor internal processes.

With regard to the bugs themselves, embedded devices, such as routers and switches, infrequently have the native ability to protect themselves against the exploitation of memory corruption-based vulnerabilities (such as the protections that exist in many modern enterprise operating systems and hardware). Therefore, when exploiting your bug, you're less likely to have to deal with things like memory randomization and stack canaries. Even so, you're most likely going to want to introduce a bug that doesn't involve any kind of memory corruption. For one, the cost of failure when exploiting memory corruption flaws in the embedded world is likely going to give the game away when, in all likelihood, the device locks up or hard resets -- potentially resulting in the loss of configuration data. (We see this all the time in the SCADA world.)

Business logic bugs, on the other hand, are far more attractive because they're (generally speaking) both more difficult to spot and easier to exploit. The drawback here is that if you're introducing a bug in a simple protocol (such as UDP), you're going to have a lot less protocol complexity to hide such a bug within (along with a relatively rigid RFC), as opposed to a more complex and less consistently implemented protocol, such as HTTP, for example. A well-thought-through implementation backdoor is also probably going to be multifaceted and involve multiple "quirks" in a given platform architecture, such as leveraging secondary bugs in an architecture's security model to achieve an end goal.
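As an illustration of why logic bugs are so attractive, consider this hypothetical authentication check (again, a sketch invented for this column, not any real product's code): the comparison length is taken from the client-supplied credential rather than the stored secret, so a zero-length password compares zero bytes and "succeeds." There's no memory corruption, nothing for a fuzzer to crash, and the flaw is trivially explained away as an oversight.

```c
#include <string.h>

/* Hypothetical login handler: using the *supplied* length as the
 * strncmp bound means a client that sends an empty password compares
 * zero bytes -- and strncmp of zero bytes always returns 0 (a match). */
int check_password(const char *supplied, size_t supplied_len,
                   const char *stored) {
    if (strncmp(supplied, stored, supplied_len) == 0)
        return 1;   /* authenticated */
    return 0;       /* rejected */
}
```

Exploitation is a single malformed login attempt, and the device behaves normally before, during, and after -- exactly the reliability profile a backdoor designer wants.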

Ultimately, the nature of the bug is going to vary based on what you're trying to achieve (device availability, confidentiality, etc.). However, the ease and reliability of exploitation are going to remain a high priority in the design of any implementation backdoor.

In understanding the potential complexity of a well-placed and plausibly deniable bug, it's also hard to see how a security review performed as part of your procurement process can be reasonably expected to spot such a backdoor. The truth is, it can't. Sure, its exploitation might get spotted, but the chances are that by that time it's going to be too late.

This brings us full circle to the concerns recently raised by the House Select Committee on Intelligence. While the committee failed to present a smoking gun, an understanding of how a supply chain influence might be achieved, coupled with a realization of the opportunity for subversion, is going to give any reasonable person the jitters, particularly when it comes to deployments pertaining to national security (and integrity). This equally applies to organizations involved in outsourcing development to overseas contractors in environments where enforcing mature SDL practices and vetting of developers may be next to impossible.

My advice: The easiest way to buy untainted and organic is to keep it local.

Tom Parker is the CTO of FusionX.
