Talking 'Bout My Reputation
When good security monitoring means not believing everything you're told
Imagine it's next year. OK, imagine a few years from now -- probably not as soon as some might hope -- when we all have monitoring systems that can automatically react based on a number of conditions. And let's say that some of these conditions are based on the output of threat intelligence feeds.
Threat intelligence feeds are growing in number almost daily. Nearly every IDS/IPS, anti-malware, and firewall vendor has one: it starts within the company's own research lab, and then the research is productized, first for direct consumption by the vendor's own offerings, and then perhaps for integration with other vendors' products. SIEM and analytics vendors can either consume externally generated threat intelligence data or assemble their own from whatever data the customer is collecting. And finally, there are dedicated threat intelligence vendors that build extensive sensor and honeypot networks and use their own proprietary analysis to build a feed. Some products aggregate many of these sources at once; one forensics tool I know of draws on as many as 96 intelligence feeds for its dashboard.
One large component of intelligence feeds is an overall score for any given entity that is tracked, usually an IP address or range. These might be further tagged by geolocation, industry of the registered owner of the range, and so on, but, in general, the fundamental attribute is an IP address. If bad activity is seen to come from a particular address -- for example, being part of a botnet -- the IP is given a "worse" reputation score. And in an ideal world, any prevention system that used this reputation data would be able to react automatically, for example, by blocking access or alerting a human somewhere.
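To make the mechanics concrete, here is a minimal sketch of that kind of automated reaction. The record layout, the `should_block` helper, and the threshold of 80 are all assumptions for illustration; real feeds use vendor-specific schemas and scoring scales.

```python
from dataclasses import dataclass

@dataclass
class Reputation:
    """Hypothetical per-IP reputation record from a feed."""
    ip: str
    score: int          # 0 (clean) to 100 (known bad) -- assumed scale
    tags: tuple = ()    # e.g. ("botnet", "geo:US")

BLOCK_THRESHOLD = 80    # assumed policy knob, not a standard value

def should_block(rep: Reputation) -> bool:
    """Automated decision: block traffic from any IP whose
    reputation score crosses the policy threshold."""
    return rep.score >= BLOCK_THRESHOLD

bad = Reputation("203.0.113.7", score=92, tags=("botnet",))
good = Reputation("198.51.100.4", score=12)
print(should_block(bad), should_block(good))  # True False
```

Note that the decision is keyed entirely on the IP address, which is exactly the weakness discussed next.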
There's just one problem: IP addresses do not directly correspond with the bad actors behind them. I can take my laptop, go to different locations and have different IPs assigned each time. I'm still the same actor, with the same laptop, but I can be coming from a hotspot in Peoria, an office network in San Francisco, my home network, or my brother's house. And each of those points of presence will have amassed a reputational score that has nothing to do with me. However, I can sure leave a trail of destruction behind me if I behave badly enough.
With great reputation ranking comes great responsibility. How quickly can you get off someone's blacklist after you clean up an infection on your network? I have seen organizations unable to send email for hours or days because they couldn't convince an anti-spam vendor quickly enough that there had been a mistake, or that they were really okay now. It's not necessarily a case of false positives; they might well have been compromised, but that state existed only for a point in time, and the "bad bit" needed to be cleared.
So the more automatically you respond based on a rating score, the more quickly you need to be able to change that response based on new data. Otherwise, I can't think of anything easier than to create a denial of service attack by generating network traffic to damage a reputation. Why DoS someone directly when you can get scores of intelligence feeds to do it for you?
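One way to make an automated response that reversible is to attach a time-to-live to every block, so a stale reputation hit ages out instead of denying service indefinitely. The sketch below assumes a simple in-memory blocklist; the class name, TTL policy, and manual-override method are illustrative, not any vendor's API.

```python
import time

class ExpiringBlocklist:
    """Blocklist whose entries expire automatically, so a block
    driven by a reputation score cannot outlive the new data
    that should clear it."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._entries = {}            # ip -> expiry timestamp

    def block(self, ip: str) -> None:
        self._entries[ip] = time.monotonic() + self.ttl

    def unblock(self, ip: str) -> None:
        # Manual override for false (or temporary) positives.
        self._entries.pop(ip, None)

    def is_blocked(self, ip: str) -> bool:
        expiry = self._entries.get(ip)
        if expiry is None:
            return False
        if time.monotonic() >= expiry:
            del self._entries[ip]     # entry aged out; clear the "bad bit"
            return False
        return True
```

Choosing the TTL is itself a policy decision: too long and you become the denial-of-service vector described above; too short and a persistent attacker just waits you out.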
Call it a reflective reputation attack, or blacklisting by proxy. If enough networks start using the combination of intelligence feeds with automated blocking, it might be the next new form of harassment. If you know all of the factors that go into a rating score, can generate activity to manipulate them, and can spoof the target's IP range, you can at least tie your victim up in a form of bureaucracy while it gets its name cleared by each feed vendor in turn.
So the most important thing about this kind of monitoring is still how you identify and deal with both false positives and temporary positives. If you're going to rely on external, automated intelligence, then you need to make sure you keep some in-house intelligence available as well. Plan for failure, and plan for change.
Wendy Nather is Research Director of the Enterprise Security Practice at the independent analyst firm 451 Research. You can find her on Twitter as @451wendy.