This is the kind of stuff that happens when guys who build atomic bombs are set loose on computer software.
But how the attackers actually broke Windows Update is beside the point. This not-so-simple attack now puts a critical anchor of your security operations into question. We've all been taught that one of the fundamentals of security is to patch early and often. You probably remind the ops guys constantly that the day after Patch Tuesday is Exploit Wednesday. You read the anecdotes in breach reports showing that many successful breaches result from attacks on software for which patches were already available (sometimes for months). When updates ship from key vendors, you trust them and you apply them. Right?
Even better, Flame was an ingenious attack vector because most of your traditional controls give a pass to patches and updates. Your HIPS and/or endpoint protection sees the registry updates, system file modifications, and configuration changes made by patching processes, and allows them. That's what patching does, right? Yeah, that's also what malware does. The only difference is that you trust the updates you get from the trusted software vendor.
What happens when you can't trust the update? The patching emperor no longer has clothes. They were incinerated by determined attackers with unparalleled resources. Now you start questioning your patching/updating process. And you should be. As if your job weren't hard enough, now you need to do malware analysis on signed updates from "trusted" vendors? Awesome.
Now to be clear, analyzing patches isn't a radical departure from the patching process Securosis defined a few years back. In that research, Rich Mogull laid out a 10-step process, and one of the steps was to test and approve the patch. Before Flame, that meant making sure it didn't break anything. Now it means you have to also check for malware-like characteristics in updates.
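One concrete control that fits the test-and-approve step is verifying a downloaded update against a digest obtained out-of-band from the vendor, rather than trusting the signature alone (Flame's forged Microsoft certificate is exactly why signature checks by themselves fell short). Here is a minimal sketch; the function names and the idea of a vendor-published digest are illustrative assumptions, not part of any specific vendor's process:

```python
import hashlib


def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks
    so large update packages don't have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_update(path: str, published_digest: str) -> bool:
    """Compare a downloaded update against the digest the vendor
    published through a separate channel. On a mismatch, the file
    should be quarantined for malware analysis, not installed."""
    return sha256_of(path) == published_digest.strip().lower()
```

A check like this doesn't catch a vendor whose build pipeline is itself compromised, but it does force an attacker to subvert two channels instead of one, which is the point of adding the step.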
Or not. Let's be realistic: most organizations don't have the resources to keep up with all the patches in the first place. They are underfunded, under-resourced, and lack the expertise to deal with the simple malware available in hundreds of kits out there (if you know where to look). They spend most of their time cleaning up after frequent malware infections, so even if they were concerned about the undermined trust in their patching process, they don't have the resources to change anything.
On the flip side, organizations with a detailed and careful updating process likely already do some level of analysis on updates. These are probably the organizations targeted by sophisticated attackers using advanced attacks. They have in-house malware analysis capabilities, so it's just a matter of having those folks analyze the patches (as they do hundreds of other files a week). Sure, this adds another step to the patching process and may extend the patching window a few extra days to make sure the updates are legit. But these organizations already implement workarounds (like IPS rules) to block new patch-oriented attacks until devices can be updated, anyway.
So does Flame really change the game? Not really. Organizations with the resources can now add another step to their patching processes to protect against an attack vector unlikely to be used by the attackers you need to worry about anyway. In that way, security is like flying in the U.S. The TSA adds a bunch of ineffective controls to stop an attack vector the bad guys won't use again. Rock on, security theater.
My bottom line is a bit more pragmatic: this is a reminder that determined attackers will break your stuff. As with the RSA breach, which targeted the tokens used by the real targets, nothing is out of bounds. Any control you use can and will be gamed. And once again, it highlights the need to balance your defenses with extensive monitoring, to shorten the window between when a breach happens and when you know it happened.
Mike Rothman is President of Securosis and author of The Pragmatic CSO.