But the question remains: how do flaws of this magnitude get into the final product? It has been nearly seven years since Bill Gates issued the now-famous 2002 e-mail demanding change. (Which I covered in this news story.)
Even a cursory read of Howard's post from yesterday shows that building flawless software is challenging stuff. More challenging than most realize.
Howard first described the nature of the flaw, in technical detail that is beyond the scope of this blog. Essentially, the flaw was an invalid pointer dereference in the MSHTML.DLL library used for data binding: a time-of-check-time-of-use (TOCTOU) programming error in how the code handled the release of a certain type of binding object. By exploiting it, attackers could remotely infiltrate at-risk systems by having users do no more than surf to the wrong URL.
But why wasn't this error found during development, or quality assurance? Microsoft has a rather extensive Secure Development Lifecycle (SDL) in place, as the company often likes to remind us. Howard explains why, and in doing so gives us a real-world peek into how the SDL actually works.
During the code analysis and review phase, TOCTOU errors are tough to find, in part because developers weren't trained to look for this specific class of bug. "We teach memory corruption issues, and issues with using freed memory blocks; but we do not teach memory-related TOCTOU issues," Howard explains in his post. It's a condition that will soon change: "We will update our training to address this."
That type of candor is so rare in the software industry, and is sorely needed. "We had this system in place that didn't catch this condition, and we're going to fix it" is not something you hear from many vendors in IT.
Note to software developers: your customers appreciate transparency, and when you provide it, they'll be more inclined to trust you. Apple, for instance, has never publicly explained (not that I'm aware of) how it develops secure software. And from that opacity we all lose. Perhaps Apple has found superior ways of handling certain security-related development techniques that, if shared with other developers, would benefit us all with more secure software. Then there's the flip side: maybe the security process Apple has in place is arcane, and that's why we're seeing more flaws and larger patches coming from Cupertino. We just don't know.
My apologies for the digression.
Back to the MS08-078 related bug and Microsoft's SDL:
Howard also explained that during the code analysis and review process, the company's normal fuzz testing (using automated tools that feed malformed data to software in search of bugs) wouldn't necessarily have worked, because no fuzzer test case had been created for this specific code.
So, there you have it, in the development stage: developers were not formally made aware of this type of flaw, and fuzzing tools didn't have the right test cases built to find it. Both gaps probably exist because this type of attack had never registered on the radar before.
He also explained that various versions of Windows contain many defenses, to differing degrees, but that because of the nature of this flaw, only a handful mattered. For instance, the SDL mandates that Microsoft code be compiled with a certain switch (/GS), but that switch only works against stack overruns, not TOCTOU flaws. Also, both IE7 and IE8 have methods to terminate a process when a corrupted heap is detected, but this was not a heap-corruption attack. Next, he explains how ASLR and NX (which are good at stopping certain types of memory attacks) could mitigate attacks on this flaw, but perhaps not entirely defeat them.
This shows that, when conducting security training and testing, we all naturally tend to focus on the flaws perceived to pose the greatest risk. Organizations often gauge that risk by the potential impact of a successful attack and by how actively the "hacking" community exploits that class of bug. But when developers focus on driving certain types of bugs to extinction, and defensive tools make certain types of attacks more difficult, attackers will move their attention elsewhere. Perhaps to (until now) relatively obscure TOCTOU attacks.
That's why, no matter how hard vendors try, situations like this are bound to recur from time to time for a long while yet.
That brings us to IE 7 and IE 8 running on Vista in Protected Mode. It's my opinion that Protected Mode does an elegant job of protecting operating system resources from attack. As Howard notes: "When the exploit code runs, it's running at low integrity because IE runs at low integrity, and this means the exploit code cannot write to higher integrity portions of the operating system, which is just about everywhere!"
Howard continues: "For our server platforms, Windows Server 2003 and Windows Server 2008, Internet Explorer Enhanced Security Configuration also prevents the exploit from working because the vulnerable code is disabled."
Howard concludes that you'll never get code 100% "right." And on this I agree: there are just too many interconnected parts, each with its own set of springs, gears, and levers. Which brings us to the headline question: has Microsoft gotten anywhere with its Trustworthy Computing initiative?
I think Internet Explorer 7 and higher, running in Protected Mode on Vista, is the most secure browsing experience Microsoft has ever created. And when it comes to software development practices, the company has come a long way since Gates' e-mail to every employee, sent nearly seven years ago. I'm also more comfortable with a Microsoft that has grown more collaborative with the IT industry and the information security community, and more transparent about its mistakes.