Vulnerability disclosure has always been prone to melodrama.
Arguments about what really is "responsible" disclosure. Web security researchers being dragged off in handcuffs for "knocks on the door," while software security researchers gleefully post proof-of-concept exploits publicly. Vulnerability researchers rallying to the cry of "no more free bugs," while software vendors waffle between "sure, we'll pay you," "no, but we'll send you a nice thank-you," and "that's extortion."
Over the years, the security and software industries have developed some better ways to work together: software bug bounty programs and corporate policies authorizing third parties to hunt for vulnerabilities in their websites, for example.
It hasn't all been forward progress, though. Recent events show that there's still a long way to go; whether these will prove to be isolated incidents or new trends remains to be seen.
Public Spats Between Tech Giants
On Jan. 11, Google's Project Zero publicly disclosed an unpatched vulnerability in Microsoft software. They'd privately disclosed it to Microsoft and given the company 90 days to patch it. When Microsoft missed that 90-day deadline, Project Zero published the vulnerability, complete with proof-of-concept code, instead of agreeing to Microsoft's request for a two-day extension that would have given it until Patch Tuesday. This was the second time in two weeks that Project Zero had released an unpatched Microsoft vulnerability.
Microsoft was displeased. In a blog post, senior director of the Microsoft Security Response Center Chris Betz wrote: "Although following through keeps to Google's announced timeline for disclosure, the decision feels less like principles and more like a 'gotcha,' with customers the ones who may suffer as a result."
Google responded by publishing yet another unpatched Microsoft vulnerability less than a week later.
Javvad Malik explained the whole sordid affair in hilarious fashion in the video below, concluding "the security industry needs to just mature and grow up and find ways that they can find stuff quicker and better, together":
Paying Known Cyber-Criminals
Last week, a fraud detection firm reported that a hacker named "Mastermind" was advertising on the black market, looking for buyers for 20 million user records he (or she) had stolen from Russia-based dating site Topface.
So Topface tracked Mastermind down and offered him a sweet deal. They got Mastermind to agree to stop selling the stolen data, and in return, as Topface chief executive Dmitry Filatov told Reuters, "We have paid him an award for finding a vulnerability and agreed on further cooperation in the field of data security."
Filatov did not disclose the sum paid to Mastermind. Regardless, Topface's largesse is surprising, especially considering that the company says the thief took only email addresses, not passwords or message content. (The haul might have included more, though -- the fraud detection firm reported that the cache of stolen data comprised 20 million "credentials," including 7 million from Hotmail accounts and 2.5 million from Yahoo and Gmail accounts.)
There's certainly an argument to be made for trying to convert black hats into white hats. There are even arguments to be made for paying ransoms to criminals who request them (which this criminal did not). However, offering a criminal cash and consulting work still sets a dangerous precedent. Especially if Mastermind does not stick to the agreement.
Yet, Filatov is confident that he will. From Reuters: "But Filatov noted that the ads have already been removed and Topface has agreed not to pursue charges against the unidentified individual. 'As we made an agreement with him we do not see any reason for him to break it,' said Filatov."
Even More Complicated Laws
On Jan. 20, President Obama announced new proposed cybersecurity legislation that is well-intentioned but misguided. Among other things, it calls for expanding the Computer Fraud and Abuse Act's definition of "exceeding authorized access," which could further stifle the work of vulnerability researchers.
As Jeremiah Grossman, of web security research firm WhiteHat Security, told DarkReading's Ericka Chickowski, "What the proposed legislation would do is criminalize professional routine security research that’s been crucial in protecting companies and citizens at large. This outcome would be disastrous."
Added Jonathan Cran, vice president of operations at the bug bounty program firm Bugcrowd, "If passed, it will have a broad chilling effect on security researchers while the courts sort out the definition."
What do you think? Will arguments between Google and Microsoft, bonuses to cybercriminals, and broader legislation improve infosecurity for everyone, or just make the entire security industry look bad? Let us know in the comments below.

Sara Peters is Senior Editor at Dark Reading and formerly the editor-in-chief of Enterprise Efficiency. Prior to that she was senior editor for the Computer Security Institute, writing and speaking about virtualization, identity management, cybersecurity law, and a myriad ...