Commentary | Tom Parker | 10/28/2012

Supply Chain Woes: Human Error Or Something Else Entirely?

How easily are plausibly deniable bugs really introduced into the supply chain, and are recent fears concerning foreign technologies more hype than fact?

A thought that has lingered in my mind since the industry's annual jaunt to Las Vegas this year has been the idea of plausible deniability when it comes to infiltrating the supply chain and introducing malicious code into software and, in particular, protocol implementations. FX and the_gerg presented some great research relating to vulnerabilities in Huawei equipment. Their talk concluded with a slide suggesting that you get what you pay for.

But do you? Would Huawei equipment be that much more secure at even twice or three times the market price? I've heard a number of reasonable arguments on both sides of the debate -- on one hand, that the concerns about Huawei and ZTE are plausible, and on the other, that Huawei and ZTE are being treated unfairly. Supply-chain concerns aren't just for those buying commercially available products developed overseas; they should also worry corporations outsourcing development to overseas entities.

Arguments as to whether Huawei is having issues with its development practices aside, the general idea of introducing plausibly deniable vulnerabilities into the supply chain is an interesting subject, which, once you begin to think about it, isn't quite so simple.

At FusionX, we're from time to time asked to simulate the subversion of a client's internal and external supply chain -- often involving the modification of internally developed programs that are subjected to some form of SDL process. First of all, you need to conceive of bugs that could both plausibly be introduced through human error and pass muster when the code is subjected to security review. So, no, we're not talking about a simple stack-based strcpy bug that's going to get spotted by both humans and source-code analysis products. Chances are, if you're going to pull this off, you're going to want as few people as possible knowing about your little skunk works project.
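
To make this concrete, here's a minimal, hypothetical sketch -- in C, and not drawn from any real product -- contrasting the kind of bug that gets caught immediately with one that could plausibly survive review: a signed/unsigned length confusion that reads like an honest mistake.

```c
#include <stdint.h>
#include <string.h>

#define BUF_SIZE 512

/* The obvious bug: any reviewer or static analyzer will flag an
 * unbounded strcpy of attacker-controlled data. */
void handle_obvious(const char *attacker_data) {
    char buf[BUF_SIZE];
    strcpy(buf, attacker_data);         /* classic stack smash */
}

/* The plausibly deniable bug: the packet length lands in a signed int,
 * so a crafted length such as 0xFFFFFFFF is negative, sails past the
 * bounds check, and then wraps to an enormous size_t inside memcpy.
 * It reads like a routine defensive check -- an easy "human error". */
void handle_subtle(const uint8_t *pkt, int len) {
    char buf[BUF_SIZE];
    if (len < BUF_SIZE)                 /* negative len passes */
        memcpy(buf, pkt, (size_t)len);  /* wraps to a huge copy */
}
```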

In any large software or hardware development firm, there's a pretty high chance that the code you commit is going to undergo some kind of review. In addition to internal reviews, many large vendors will make at least partial source code available to their biggest partners and customers for numerous reasons (such as product integration). With this in mind, I personally doubt that the bugs discussed in Vegas were put there intentionally; they were more likely the result of poor internal process.

With regard to the bugs themselves, embedded devices such as routers and switches rarely have the native ability to protect themselves against the exploitation of memory corruption vulnerabilities (protections that exist in many modern enterprise operating systems and hardware). So while you're less likely to have to deal with things like memory randomization and stack canaries, you're still going to want to introduce a bug that doesn't involve any kind of memory corruption. For one, the cost of failure when exploiting memory corruption flaws in the embedded world is likely to give the game away: in all likelihood, the device locks up or hard resets -- potentially resulting in the loss of configuration data. (We see this all the time in the SCADA world.)

Business logic bugs, on the other hand, are far more attractive because they're (generally speaking) both harder to spot and easier to exploit. The drawback is that if you're introducing a bug into a simple protocol (such as UDP), you have far less protocol complexity to hide it within (along with a relatively rigid RFC) than you would in a more complex and less consistently implemented protocol, such as HTTP. A well-thought-through implementation backdoor is also probably going to be multifaceted, involving multiple "quirks" in a given platform architecture -- such as leveraging secondary bugs in an architecture's security model to achieve an end goal.
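
As a hypothetical illustration -- again in C, and not taken from any real device -- consider an authentication routine where the comparison is bounded by the client-supplied length. Nothing crashes, no memory is corrupted, and the code reads like a defensive bounds check, yet an empty or prefix-matching credential authenticates:

```c
#include <string.h>

/* Placeholder secret; assume a real device would load this from NVRAM. */
static const char stored_secret[] = "s3cret-from-nvram";

/* Returns 1 on successful authentication, 0 otherwise. */
int check_auth(const char *client_pw, size_t client_len) {
    /* Looks like a sensible sanity check on the supplied length... */
    if (client_len > strlen(stored_secret))
        return 0;
    /* ...but bounding strncmp by the *client's* length means a
     * zero-length password (strncmp with n == 0 returns 0) -- or any
     * correct prefix of the secret -- compares equal. */
    return strncmp(client_pw, stored_secret, client_len) == 0;
}
```

A reviewer skimming this sees input validation, not a backdoor -- which is exactly the kind of plausible deniability we're talking about.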

Ultimately, the nature of the bug is going to vary based on what you're trying to achieve (device availability, confidentiality, etc.). However, the ease and reliability of exploitation are going to remain a high priority in the design of any implementation backdoor.

Once you understand the potential complexity of a well-placed, plausibly deniable bug, it's hard to see how a security review performed as part of your procurement process could reasonably be expected to spot such a backdoor. The truth is, it can't. Sure, its exploitation might get spotted, but chances are that by that time it's going to be too late.

This brings us full circle to the concerns recently raised by the House Select Committee on Intelligence. While the committee failed to present a smoking gun, an understanding of how supply chain subversion might be achieved, coupled with a realization of the opportunity for it, is going to give any reasonable person the jitters -- particularly when it comes to deployments pertaining to national security (and integrity). The same applies to organizations outsourcing development to overseas contractors in environments where enforcing mature SDL practices and vetting developers may be next to impossible.

My advice: The easiest way to buy untainted and organic is to keep it local.

Tom Parker is the CTO of FusionX.
