When Monitoring Becomes A Liability
The combination of 'bigger data' and 'more intelligence' could lead down a path that creates problems for the enterprise
Organizations generally want to keep breaches under wraps, or at the very least to control the release of any news about them. Put mandatory reporting laws in place, and the incentive can simply be to monitor less -- what you don't know, you can't report -- or to take longer to decide whether an incident really is a compromise that needs to be reported.
Here’s an example: If you discover that some Social Security numbers were theoretically accessible on an Internet-facing Web server, but you have no logs to figure out whether they were ever accessed, then what do you do? Is it a breach, or isn’t it? Does it matter whether they were there for an hour, or a day, or a month? If something confidential is accidentally published and the mistake is caught right away, then most organizations are simply going to go, "Oops," take it down, and say no more about it. (If you think this is shocking and scandalous, you don’t understand your business.)
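If the logs had existed, of course, "was it ever accessed?" is a mechanical question. Here's a minimal sketch of that check, assuming a combined-format access log and a hypothetical file path (both are illustrative, not from any real incident):

```python
# Hypothetical sketch: if access logs *had* been kept, answering "was the
# file ever fetched?" is a simple scan. The path and the combined log
# format (Apache/nginx style) are assumptions for illustration.
import re
from pathlib import Path

SENSITIVE_PATH = "/reports/ssn_export.csv"   # hypothetical exposed file
REQUEST_LINE = re.compile(r'"(?:GET|POST) (?P<path>\S+) HTTP')

def accesses(log_file: str, target: str) -> list[str]:
    """Return the raw log lines that requested the target path."""
    hits = []
    for line in Path(log_file).read_text(errors="replace").splitlines():
        m = REQUEST_LINE.search(line)
        if m and m.group("path").split("?")[0] == target:
            hits.append(line)
    return hits

if __name__ == "__main__":
    hits = accesses("access.log", SENSITIVE_PATH)
    print(f"{len(hits)} request(s) for {SENSITIVE_PATH}")
```

Without the log file, that function has nothing to read -- which is exactly the ambiguity organizations end up hiding behind.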
But there's a growing problem: Not all the indications of a security issue are under the enterprise's own control, and not all of them are subject to interpretation. One very common practice is the externally mandated audit or vulnerability assessment, in which an external authority is empowered to examine and report on your security controls, or even pen test you, and publish some form of report. You may argue that still allowing SSL 2.0 doesn't represent any significant security risk, but that won't convince the auditor to drop it from the checklist. And in publicly available audit reports (such as those in the public sector), descriptions of findings are kept intentionally vague so as not to give clues to would-be attackers.
But this can also mean that "there is a weakness in transaction security" actually translates to "the server still allows a few remaining ancient browsers to use SSL 2.0." And the organization in question probably won't get the chance to explain the real story.
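And checking whether a server still accepts a legacy protocol takes no special access; anyone can probe it. Here's a sketch that tests for TLS 1.0 acceptance -- modern ssl libraries can no longer even negotiate SSL 2.0, so TLS 1.0 stands in for "ancient" -- with the host name as an assumption (note that a client-side OpenSSL build that disables old protocols will report False regardless):

```python
# Sketch: does the server still accept a legacy protocol version?
# Probes TLS 1.0, since SSL 2.0 can't be negotiated by modern libraries.
import socket
import ssl

def accepts_tls10(host: str, port: int = 443) -> bool:
    try:
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        ctx.minimum_version = ssl.TLSVersion.TLSv1
        ctx.maximum_version = ssl.TLSVersion.TLSv1   # offer only TLS 1.0
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, ValueError, OSError):
        return False

print(accepts_tls10("example.com"))   # hypothetical target
```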
Debating the seriousness of a given vulnerability is one thing; after all, having that vulnerability doesn’t necessarily mean it’s being exploited. But more unambiguous indicators are out there for anyone to find, such as membership in a botnet. If something in your IP address range is talking to a known command-and-control center, then at least at one level you’ve been 0wn3d, and you can’t explain it away with a +5 Wand of Pragmatism.
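That kind of indicator takes almost no sophistication to check. A minimal sketch of the idea, assuming a plain-text C2 feed and a CSV flow log with src/dst/dst_port columns (both invented formats, not any particular product's output):

```python
# Minimal sketch: flag outbound flows whose destination appears on a
# known-C2 list. Feed and flow-log formats are assumptions.
import csv

def load_c2_feed(path: str) -> set[str]:
    """One IP per line; '#' lines are comments."""
    with open(path) as f:
        return {ln.strip() for ln in f
                if ln.strip() and not ln.startswith("#")}

def flag_flows(flow_csv: str, c2_ips: set[str]) -> list[dict]:
    """Return flow records (src,dst,dst_port) whose dst is a known C2."""
    with open(flow_csv, newline="") as f:
        return [row for row in csv.DictReader(f) if row["dst"] in c2_ips]

if __name__ == "__main__":
    c2 = load_c2_feed("c2_feed.txt")
    for hit in flag_flows("flows.csv", c2):
        print(f"{hit['src']} -> {hit['dst']}:{hit['dst_port']}  (known C2)")
```

The point is that the observer doesn't need to be inside your network; anyone watching traffic to the C2 side of the conversation gets the same answer.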
Not only is botnet membership publicly available to anyone who cares to look -- a lot more people now care to look. Threat intelligence is growing at a steady pace, and the data is coming not just from vendors' product logs, but from honeypots and sensors deployed across the Internet. Several companies will now offer to tell you whether you've been compromised by searching their very large stores of data for your IP addresses; others will also monitor Pastebin, IRC, and other channels for any data related to your company.
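The core of the "search the data for your addresses" service is not exotic, either. A sketch, assuming a directory of already-collected paste text and a hypothetical netblock:

```python
# Sketch: scan collected paste/dump text for mentions of IPs inside your
# own netblock. The netblock and directory are assumptions.
import ipaddress
import re
from pathlib import Path

MY_NET = ipaddress.ip_network("198.51.100.0/24")   # hypothetical range
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def mentions(dump_dir: str):
    """Yield (file, ip, line) for every in-range IP found in the dumps."""
    for path in Path(dump_dir).glob("*.txt"):
        for line in path.read_text(errors="replace").splitlines():
            for raw in IP_RE.findall(line):
                try:
                    ip = ipaddress.ip_address(raw)
                except ValueError:
                    continue
                if ip in MY_NET:
                    yield path.name, raw, line.strip()

for fname, ip, line in mentions("pastes"):
    print(f"{fname}: {ip}: {line[:80]}")
```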
For now, at least, this sort of threat intelligence is governed by a gentlemen's agreement that any indications of a breach will be supplied only to the affected party. But how long will it stay that way? We already have regulatory authorities that would probably be very interested in knowing whether a financial institution, government agency, or healthcare provider has compromised machines -- and they might have the legal right to know. There is nothing to stop an unaffiliated party from gathering its own botnet membership data and publishing it (except, perhaps, the threat of lawsuits). And is the release of publicly available information even illegal?
We're not there yet, but the WikiLeaks-style data exposure trend may well extend to general breach disclosure that organizations have no way to stop. Naming and shaming could become a lot more widespread: "The National Bank of Freedonia has had at least four systems in a botnet every day for the past six months." And it could become shorthand for how secure an enterprise is -- a breach index, if you will.
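Such an index wouldn't even need to be sophisticated. A toy version, with numbers invented purely for illustration:

```python
# Toy "breach index" of the sort the naming-and-shaming scenario implies:
# the share of observed days with at least one botnet sighting, plus the
# mean daily count. All numbers are invented.
daily_sightings = [4, 5, 4, 6, 4, 4, 7]   # infected hosts seen per day

days_dirty = sum(1 for n in daily_sightings if n > 0)
index = days_dirty / len(daily_sightings)          # 1.0 = dirty every day
mean_hosts = sum(daily_sightings) / len(daily_sightings)

print(f"breach index: {index:.2f}, avg infected hosts/day: {mean_hosts:.1f}")
```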
The more security intelligence data grows, and the more we can do with it, the greater the chance that it becomes a double-edged sword. Sometimes it's possible to know too much.
Wendy Nather is Research Director of the Enterprise Security Practice at the independent analyst firm 451 Research. You can find her on Twitter as @451wendy.