Dozens of vulnerability announcements go by every week on mailing lists such as Bugtraq, VulnWatch, and SecurityTracker. Without security researchers to keep vendors in line, we would not make anywhere near as much progress in security. But many security researchers seem to be in the game only for fame and glory, making things less secure as vendors scramble to fix problems. How much disclosure is enough? Do we really need it at all?
Vulnerability Pimps Suck
In a recent paper titled "Teaching an Old Dog New Tricks," security guru Marcus Ranum argues that independent "security researchers" who spend their time constantly looking for security bugs are a drain on the security community. He even has a name for these people: vulnerability pimps.
He argues that if these people were really serious about security, they would join product security teams at the vendors and eschew their 15 minutes of fame on CNN or at DEFCON.
Marcus does not approve of releasing any information at all about bugs that will place people at risk, whatever the justification. And he practices what he preaches. When he recently used Fortify to discover a number of exploitable buffer overflows in the venerable fwtk firewall toolkit (which he helped to create back in the Pleistocene era), he didn't gleefully run to the press or even write the exploits up for Bugtraq. Instead, he contacted the owners of the appropriate modules and told them what he had found.
As he says:
Contrary to the ideology of the "full disclosure" crowd, everyone I contacted was extremely responsive and assured me that the bugs would be fixed in the next release. No hoopla, no press briefing, no rushing out a patch. I won't get my fifteen minutes of fame on CNN, but that's all right. I'd rather be part of the solution than part of the problem.
Vendors Need a Little Push
I don't completely agree with Marcus, probably because of my own security past. In the mid '90s when I was working with the Princeton team to break Java, we had a hard time getting Sun, Netscape (remember them?), and Microsoft to take our discoveries seriously. We learned that only through a careful process of very public disclosure could we get them to acknowledge and, more importantly, fix the serious security problems that we constantly uncovered.
Over time, the companies began to understand that we were serious about security, and when we called, they paid attention. But in the beginning, when we were "anonymous," they often had better things to do.
The last thing on our minds back then was coverage in the Wall Street Journal. We came to find that the press could be used to leverage huge corporations into responsible action where they would otherwise turn a deaf ear.
Where we do agree with Marcus is in the nature of disclosure. We were very careful not to release information that would allow rampant attacks on unfixed systems. This was pretty easy to do. Instead of publishing exploit code, we published detailed descriptions of the vulnerabilities and only shared the exploits themselves with the vendors.
Of course a determined attacker could have taken our explanations and reconstructed working exploits, but in 10 years of very public exposure, this never happened once. It turns out that most malicious hackers are lazy buggers! What did happen was that many people learned important lessons about software security and mobile code, from two critical perspectives -- attack and defense.
The problem with Marcus' theory is its naïve belief that vendors will always do the right thing given the opportunity. Sometimes they just don't. Plenty of security researchers doing essential work in software security know this, and they are more than happy to use public disclosure and the power of the press to move vendors in the right direction.
Proper Security & the Dark Arts
I designed a modified yin-yang brand for the cover of my book Software Security: Building Security In (now the logo for Addison-Wesley's Software Security Series, which I edit). This is not just geeky eye candy -- there's a philosophical approach to security implied by the image.
The yin-yang design is the classic Eastern symbol used to describe the inextricable mixing of standard Western dualities (black/white, good/evil, heaven/hell, etc.). Eastern philosophies are described as holistic because they teach that reality combines such opposites in a way that one pole cannot be sundered from the other.
In the case of software security, two distinct threads -- black hat activities and white hat activities (offense/defense, construction/destruction) -- intertwine to make up software security. A holistic approach, combining yin and yang, black hat and white hat, is required.
In my view, security absolutely requires a mix of destructive and constructive activities. Destructive activities are about attacks, exploits, and breaking software -- the kinds of things that vulnerability pimps do. These kinds of things are represented by the black hat (offense). Constructive activities are about design, defense, and functionality -- the kinds of things that vendors should be concentrating more resources on. These are represented by the white hat (defense). Both hats are necessary.
Ultimately we need disclosure, and there is a clear role for security researchers. Meanwhile, I have to get back to patching the Gary McGraw automaton.