Perimeter
10/28/2012
05:17 PM
Tom Parker
Commentary

Supply Chain Woes: Human Error Or Something Else Entirely?

How easily are plausibly deniable bugs really introduced into the supply chain, and are recent fears concerning foreign technologies more hype than fact?

A thought that has lingered in my mind since the industry's annual jaunt to Las Vegas this year has been the idea of plausible deniability when it comes to infiltrating the supply chain and introducing malicious code into software and, in particular, protocol implementations. FX and the_gerg presented some great research relating to vulnerabilities in Huawei equipment. Their talk concluded with a slide suggesting that you get what you pay for.

But do you? Would Huawei equipment be that much more secure at even twice or three times the market price? I've heard a number of reasonable arguments on both sides of the debate: on one hand, that the concerns about Huawei and ZTE are plausible; on the other, that the two companies are being treated unfairly. Supply chain concerns aren't just for those buying commercially available products developed overseas; they should also be a point of concern for corporations outsourcing development to overseas entities.

Arguments as to whether Huawei is having issues with its development practices aside, the general idea of introducing plausibly deniable vulnerabilities into the supply chain is an interesting subject, which, once you begin to think about it, isn't quite so simple.

At FusionX, we're from time to time asked to simulate the subversion of a client's internal and external supply chain -- often involving the modification of internally developed programs that are subjected to some form of SDL process. First of all, you need to conceive of bugs that could both plausibly be introduced through human error and pass muster when the code is subjected to security review. So, no, we're not just talking about a simple stack-based strcpy bug that's going to get spotted by human reviewers and source-code analysis products alike. And chances are that if you're going to pull this off, you're going to want as few people as possible knowing about your little skunk works project.
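To make that concrete, here is a minimal, entirely hypothetical sketch in C, not drawn from any real code base, of the difference between the bug that gets caught and the bug that might not. Every function and buffer name below is invented for illustration.

/* Hypothetical illustration only -- not taken from any real product. */
#include <stdint.h>
#include <string.h>

#define BUF_SIZE 64

/* Variant 1: the obvious bug. An unbounded strcpy into a fixed stack
 * buffer is exactly what reviewers and static analyzers are trained to flag. */
void handle_name_obvious(const char *name)
{
    char buf[BUF_SIZE];
    strcpy(buf, name);              /* classic stack overflow: gets spotted */
    (void)buf;
}

/* Variant 2: the plausibly deniable bug. A bounds check is present, but the
 * attacker-supplied length arrives as a signed int. A negative value sails
 * past the check and becomes an enormous size_t inside memcpy. */
void handle_name_subtle(const uint8_t *data, int len)
{
    char buf[BUF_SIZE];
    if (len > BUF_SIZE - 1)         /* looks like diligence ... */
        return;
    memcpy(buf, data, (size_t)len); /* ... but len < 0 slips through */
    buf[len] = '\0';
    (void)buf;
}

int main(void)
{
    /* Benign inputs so the sketch actually runs; the point is the review
     * story, not the exploit. */
    handle_name_obvious("admin");
    handle_name_subtle((const uint8_t *)"admin", 5);
    return 0;
}

The second variant still contains a bounds check a reviewer can point to; the flaw lives in the signedness of the length field, which is exactly the kind of mistake that reads as ordinary human error.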

In any large software or hardware development firm, there's a pretty high chance that the code you commit is going to undergo some kind of review. In addition to internal reviews, many large vendors will make at least partial source code available to their biggest partners and customers for numerous reasons (such as product integration). With this in mind, I personally doubt that the bugs discussed in Vegas were put there intentionally; they were more likely the result of poor internal process.

With regard to the bugs themselves, embedded devices such as routers and switches infrequently have the native ability to protect themselves against the exploitation of memory corruption vulnerabilities (protections that exist in many modern enterprise operating systems and hardware). So when exploiting your bug, you're less likely to have to deal with things like memory randomization and stack canaries. Even so, you're most likely going to want to introduce a bug that doesn't involve any kind of memory corruption at all. For one thing, the cost of failure when exploiting memory corruption flaws in the embedded world is likely to give the game away: in all likelihood, the device locks up or hard resets -- potentially resulting in the loss of configuration data. (We see this all the time in the SCADA world.)
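As a rough, purely hypothetical sketch of why that failure cost matters: the watchdog and packet-handling routines below are invented stand-ins for the single long-lived management process you typically find on this class of device.

/* Hypothetical sketch; every name here is invented for illustration. */
#include <stdint.h>
#include <string.h>

static void watchdog_kick(void)
{
    /* stub: feed the hardware watchdog so the device doesn't reset */
}

static int recv_packet(uint8_t *buf, size_t cap)
{
    (void)buf; (void)cap;               /* stub: real firmware reads the wire */
    return 0;
}

static void parse_mgmt_packet(const uint8_t *pkt, int len)
{
    char field[32];
    if (len <= 0 || len > 1500)
        return;
    /* Suppose this copy can be overflowed. With no stack canary or ASLR the
     * overwrite is simpler than on a hardened server OS, but it still has to
     * be perfect on the first try: a miss faults the only management process,
     * the watchdog starves, and the box hard-resets, potentially taking its
     * running configuration with it. That reboot is exactly the tell a
     * backdoor author wants to avoid. */
    memcpy(field, pkt, (size_t)len);    /* len can exceed sizeof(field) */
    (void)field;
}

int main(void)
{
    uint8_t pkt[1500];
    for (;;) {                          /* one long-lived process does everything */
        watchdog_kick();
        int len = recv_packet(pkt, sizeof(pkt));
        parse_mgmt_packet(pkt, len);
    }
}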

Business logic bugs, on the other hand, are far more attractive because they're (generally speaking) both more difficult to spot and easier to exploit. The drawback here is that if you're introducing a bug into a simple protocol (such as UDP), you'll have a lot less protocol complexity to hide it within (and a relatively rigid RFC to contend with) than you would with a more complex and less consistently implemented protocol such as HTTP. A well-thought-through implementation backdoor is also probably going to be multifaceted and involve multiple "quirks" in a given platform architecture, such as leveraging secondary bugs in an architecture's security model to achieve the end goal.
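As a purely hypothetical illustration of what multiple "quirks" might look like in practice, here is a sketch in C, with every identifier invented for the example, of a logic backdoor assembled from two individually defensible decisions in a made-up management protocol.

/* Hypothetical sketch only -- not drawn from any real implementation. */
#include <stdint.h>

#define OPT_KEEPALIVE  0x01
#define OPT_COMPRESS   0x02
#define OPT_DIAG       0x40     /* "reserved for factory diagnostics" */

struct session {
    int     authenticated;
    uint8_t options;
};

/* Quirk 1: options are parsed and applied before authentication is
 * checked, an ordering decision that reads as harmless on its own. */
static void apply_options(struct session *s, uint8_t opts)
{
    s->options = opts;
    /* Quirk 2: the diagnostic path marks the session authenticated so
     * "factory tooling" can run, a one-liner a reviewer may read as
     * leftover test support rather than intent. */
    if (opts & OPT_DIAG)
        s->authenticated = 1;
}

static int handle_command(struct session *s, uint8_t opts, const char *cmd)
{
    (void)cmd;
    apply_options(s, opts);
    if (!s->authenticated)          /* the check itself is present and correct ... */
        return -1;
    return 0;                       /* ... but OPT_DIAG walked straight past it */
}

int main(void)
{
    struct session s = { 0, 0 };
    /* A privileged command issued with nothing but the diagnostic bit set
     * succeeds: no memory corruption, no crash, nothing for a canary or
     * ASLR to catch. */
    return handle_command(&s, OPT_DIAG, "show running-config");
}

Neither quirk is suspicious in isolation; the backdoor is the interaction between them, which is exactly what makes it both reliable to exploit and plausibly deniable under review.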

Ultimately, the nature of the bug is going to vary based on what you're trying to achieve (device availability, confidentiality, etc.). However, the ease and reliability of exploitation are going to remain a high priority in the design of any implementation backdoor.

Once you understand the potential complexity of a well-placed and plausibly deniable bug, it's hard to see how a security review performed as part of your procurement process can reasonably be expected to spot such a backdoor. The truth is, it can't. Sure, its exploitation might get spotted, but chances are that by then it's going to be too late.

This brings us full circle to the concerns recently raised by the House Select Committee on Intelligence. While the committee failed to present a smoking gun, an understanding of how supply chain subversion might be achieved, coupled with a realization of how much opportunity there is to do so, is going to give any reasonable person the jitters, particularly when it comes to deployments pertaining to national security (and integrity). The same applies to organizations outsourcing development to overseas contractors in environments where enforcing mature SDL practices and vetting developers may be next to impossible.

My advice: The easiest way to buy untainted and organic is to keep it local.

Tom Parker is the CTO of FusionX.
