Commentary
12/4/2011 01:29 PM
Tom Parker

Debunking The Conficker-Iranian Nuclear Program Connection

Recent claims allude to a Conficker-Stuxnet relationship, but are they really credible?

Toward the end of last week, a number of industry colleagues pointed me in the direction of some emerging news stories that cited the purported research of John Bumgarner of the U.S. Cyber Consequences Unit in Washington, D.C. The numerous articles covering his research suggest that the Conficker worm was unleashed by the perpetrators of Stuxnet in order to provide an initial entry vector onto Iranian systems located at the Natanz nuclear fuel enrichment plant, in addition to providing a smoke screen designed to “mask” the real nature of the operation.

Further, reports citing Bumgarner’s research go on to claim that when Conficker was released, its primary mission was to identify IT assets that were “strategic Iranian facilities” and mark them accordingly, presumably for later seed infections of Stuxnet.

Wow. Where to begin.

I’ll first say I’m a big fan of well-grounded research that helps further our understanding of the supply chain that led to the creation of notable threats. Back at Black Hat USA 2010, my Stuxnet talk was actually more about methods through which we can identify relationships between components of a given threat in order to bring us closer to a profile (or even the identity) of those responsible for its creation. Ironically, Conficker was actually one of the other samples I’ve frequently used to provide a contrast to Stuxnet, specifically when it comes to the quality of the code.

What I essentially demonstrated was that while much of Stuxnet was highly sophisticated, it had several components that, unlike Conficker, were really poorly written. I digress.
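To give a flavor of what a code-quality comparison between two samples can look like — this is a simplified illustration, not the actual methodology or data from the Black Hat talk — a crude metric such as cyclomatic complexity per function, derived from disassembly, can be summarized and contrasted across samples:

```python
# Hypothetical sketch: contrasting crude code-quality metrics between two
# malware samples. Per-function branch counts would normally come from a
# disassembler; the numbers below are invented for illustration.

def cyclomatic_complexity(branch_count):
    """McCabe complexity for a single function: decision points + 1."""
    return branch_count + 1

def profile(branch_counts):
    """Summarize a sample as (mean complexity, max complexity)."""
    scores = [cyclomatic_complexity(b) for b in branch_counts]
    return sum(scores) / len(scores), max(scores)

# Invented branch counts per function for two samples.
sample_a = [2, 3, 1, 8, 2]    # tightly written, well-factored component
sample_b = [15, 22, 9, 30]    # sprawling, poorly factored code

print(profile(sample_a))  # low mean/max complexity
print(profile(sample_b))  # markedly higher on both measures
```

Consistently divergent profiles like these, across many functions and metrics, are one kind of signal that two code bases likely did not come from the same authors.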

Many alternative Stuxnet theories have been presented since it emerged into the public domain in July 2010. But this one particularly caught my attention because it plays on some common misconceptions that still persist about Stuxnet. As reported by the press, Bumgarner's theory appears to hinge on the following premises:

Conficker was used as a smokescreen and intended to “hunt down” assets associated with the Iranian nuclear program, doing no damage to infected systems: This fails to grasp that once a system is infected with an unknown component, most organizations will quite correctly consider it no longer trusted, and will therefore make an effort to remove the infection or, more likely, completely reinstall the impacted asset. This costs money, and regardless of whether the threat actively caused damage, any infection is, by nature, damaging. At the high end, Conficker is estimated to have infected as many as 35 million devices. That’s a lot of collateral damage when you consider clean-up efforts and other secondary costs associated with responding to the threat. Even if you don’t buy into this, as Rik Ferguson at Trend Micro correctly points out, Conficker was leveraged both to build botnets and to spread fake antivirus software to victims’ systems, which, in turn, were used for other nefarious purposes.

Both Stuxnet and Conficker demonstrated significant technical sophistication: It's true that Conficker and Stuxnet both sported features that were either comparatively sophisticated or wholly without precedent. Other technical similarities also exist, including exploitation of MS08-067. However, Stuxnet and Conficker have more differences than they do commonalities. One of the major issues that plagued Stuxnet was its use of a highly trivial and fragile command-and-control (C&C) mechanism (something that Duqu improved on significantly).

Conficker, on the other hand, utilized a much more sophisticated C&C mechanism and significantly more robust update functionality through its use of cryptography. Further, various code quality metrics that I ran back in 2010 clearly demonstrated an extremely low likelihood that the two threats were authored by the same group of individuals.
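Part of what made Conficker's C&C so resilient was its domain-generation approach: infected hosts and operators independently derive the same daily list of rendezvous domains, so there is no single hard-coded server to take down. The sketch below illustrates the general technique only — it is not Conficker's actual algorithm, and the parameters and names are invented for illustration:

```python
# Illustrative sketch of a date-seeded domain-generation algorithm (DGA),
# the general C&C technique Conficker employed. NOT Conficker's real
# algorithm; seed format, hash choice, and TLD are invented here.
import hashlib
from datetime import date

def generate_domains(seed_date: date, count: int = 5):
    """Derive a deterministic, pseudo-random list of rendezvous domains
    from the current date, so bot and operator agree without contact."""
    domains = []
    for i in range(count):
        material = f"{seed_date.isoformat()}:{i}".encode()
        digest = hashlib.sha256(material).hexdigest()
        # Map pairs of hex digits to eight lowercase letters for the label.
        label = "".join(
            chr(ord("a") + int(digest[2 * j:2 * j + 2], 16) % 26)
            for j in range(8)
        )
        domains.append(label + ".example")
    return domains

print(generate_domains(date(2009, 4, 1)))
```

Because defenders must pre-register or block every candidate domain for every future day, while the operator needs only one, this scheme heavily favors the attacker — which is why the Conficker response required coordinated, large-scale domain blocking.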

Conficker was Stuxnet's “door-kicker”: Quite simply, Stuxnet didn’t need a “door kicker,” especially not at the cost of tens of millions of Conficker infections. We know that the authors of Stuxnet had intimate, insider-level knowledge of and likely physical access to the Natanz fuel enrichment plant. Is there a chance that a number of Stuxnet-infected systems were also coincidentally infected with Conficker at some point in time? Of course! Is it likely that that’s how Stuxnet got there? Bzzzzt.

I’ll close by saying that neither John Bumgarner nor the U.S. Cyber Consequences Unit has released any formal, technical research paper supporting a Stuxnet/Conficker link, but I would absolutely urge them to do so should they remain confident in their theory. Until then, though, this one is going into my growing pile of Stuxnet conspiracy theory fails.

Tom Parker is Chief Technology Officer at FusionX.
