Security Bugs And Proofs Of Concept
Oracle's recent patch contained exploit code
Along with Oracle's recent patches for the "Security Fix: Bug #13510739" MySQL issue, a proof of concept (PoC) that exercises the bug was included in the release bundle.
The bug, along with the included PoC, allows an attacker to issue a command that causes the MySQL engine to hang. It's not destructive, per se, but it will certainly cause the database to come to a halt. Any of you DBAs out there who have new DBAs/programmers on your staff know that a badly written query can do the same thing, but a general-purpose command that works against all unpatched MySQL databases could be a problem -- especially if your bonus is tied to "uptime."
The inclusion of test code is not unusual. It's entirely normal -- in fact, software development teams are encouraged to write PoC code to illustrate bugs and then use that code during regression testing to verify that bugs are indeed fixed. As companies fix problems, be they security bugs or general ones, they need PoC code for verification.
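The regression-testing pattern described above can be sketched in a few lines. This is a hypothetical illustration, not Oracle's actual test code: the PoC statement itself is deliberately not reproduced, so `run_poc_query()` is a stub standing in for a real client call (in practice it would issue the statement over a MySQL connection, e.g., via PyMySQL or mysql-connector). The test simply asserts that a patched server answers within a timeout instead of hanging.

```python
import concurrent.futures

POC_TIMEOUT_SECONDS = 5  # how long a healthy, patched server may take

def run_poc_query():
    """Stub standing in for sending the PoC statement to the server.
    A patched server returns promptly; an unpatched one would hang."""
    return "ok"  # simulated patched behavior: the query completes

def poc_regression_test() -> bool:
    """Return True if the server survives the PoC within the timeout."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(run_poc_query)
        try:
            future.result(timeout=POC_TIMEOUT_SECONDS)
            return True   # query returned: the fix is in place
        except concurrent.futures.TimeoutError:
            return False  # query hung: server is still vulnerable

print(poc_regression_test())
```

Keeping a test like this in the suite is what prevents the hang from silently reappearing in a later release.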
But that's where the similarities between a normal bug and a security bug stop, at least from the perspective of software development organizations. When it comes to security, the general attitude is that sharing sample code merely educates the unwashed masses on how to exploit a vulnerability that they themselves are incapable of producing.
You'll notice that even the vocabulary software firms use changes when it comes to security issues: "bug" becomes "vulnerability," "test" becomes "attack," "proof of concept" becomes "weaponized exploit," and so on. Never let it be said that software firms are not effective marketers, because they've single-handedly remessaged security-related defects. Granted, bugs used to be an issue between software users and their vendors; security bugs allow third parties to participate, generating even more mayhem than the simple lack of intended functionality. But this added element of chaos has pushed software quality issues into the mainstream media, forcing software development firms to take security seriously -- and, overall, the industry now produces better software with more formalized testing and release management processes.
In this specific case with Oracle, leaking PoC code with the patch should not be a big deal, except for select firms that use MySQL in public-facing production environments and fail to patch in a timely fashion. Under these conditions, they run the risk that the defect will be discovered, the database will be brought down, and the customer will be forced to patch before operations resume. In other cases, the public disclosure could have put organizations under the gun to patch immediately lest their servers go down.
I am certain that the bundling of sample test code was a simple mistake, but that poor engineer will catch a lot of heat inside Oracle, especially given Oracle's posture against disclosure. (Read a recent post titled "Pain Comes Instantly," where Mary Ann Davidson argues against disclosure -- even to partners or customers.)
But I think it's worth raising a couple of issues here concerning database security, and I feel that this disclosure cannot easily be characterized as good or bad. First, there is the general assumption that no one outside the software firm and the customer that reported the issue knows about the security bug. The fact is that we're finding out, through various APT attacks, black hat disclosures, and breaches, that people outside the software development companies are already aware of these issues and disclose them as it suits their needs.
Second, there are good reasons to share these PoCs with customers: they can adapt the test cases to their environments to verify they are not vulnerable, or create rules/policies to block the specific attack signatures. For every argument against sharing, I can come up with another in favor of it. Ironically, the damage caused by disclosure matters to select companies in the short term, whereas over the long term, sharing information on threats and vulnerabilities benefits the community at large.
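The "rules/policies" idea above is essentially signature-based blocking in front of the database. A minimal, hypothetical sketch follows; the signature is an invented placeholder (the real PoC statement is not reproduced here), and any production deployment would use a database firewall or proxy rather than ad hoc code like this.

```python
import re

# Placeholder signature derived from a hypothetical PoC -- NOT the actual
# MySQL statement associated with Bug #13510739.
BLOCKED_SIGNATURES = [
    re.compile(r"(?i)\bEXAMPLE_HANG_TRIGGER\b"),
]

def should_block(statement: str) -> bool:
    """Return True if an incoming statement matches a known PoC signature."""
    return any(sig.search(statement) for sig in BLOCKED_SIGNATURES)

print(should_block("SELECT example_hang_trigger FROM t"))  # matches the placeholder
print(should_block("SELECT id FROM customers"))            # ordinary traffic
```

A rule like this is only possible if the customer has the PoC (or at least its signature) in hand, which is the crux of the argument for sharing.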
I think we are going to hear a lot more about this in the coming months, as I hear more rumors of known database exploits that have yet to be patched.
Adrian Lane is an analyst/CTO with Securosis LLC, an independent security consulting practice. Special to Dark Reading.