When too much of a good thing causes confusion and setbacks

Gunter Ollmann, CTO, Security, Microsoft Cloud and AI Division

September 21, 2013


There's a saying that too much of a good thing can be bad for you. We normally apply it to things like ice cream and chocolate, but the saying also applies to the threat intelligence world. You'd think that by doubling or even quadrupling the number of streaming intelligence feeds into your organization you'd be better off -- better informed and more secure. Unfortunately, you're likely to be wrong.

During the past couple of years, the threat intelligence service industry has really kicked into high gear. Many of the vendors in this area have been supplying their streaming intelligence services for upward of a decade to the manufacturers of popular security appliances and desktop protection suites, but it has only been more recently that enterprise businesses have found themselves in a position to consume the data directly.

The growing need for streaming security intelligence is a direct response to the rapidly evolving threat. As the threats that target an enterprise become more adaptive, more dynamic, and more evasive of legacy protection architectures, there's a driving need for real-time analytics and for inputs into a new generation of dynamic analysis systems. To this end, the common logic is "more is better" when it comes to threat intelligence. But is it?

Last week, I came across an opinion piece at SC Magazine by Kathleen Moriarty (global lead security architect, EMC's office of the CTO), titled "Threat-intelligence sharing is dead, and here's how to resuscitate it," in which she touches on the problems of sharing intelligence data and using it effectively. While I agree with her that contemporary threat intelligence sharing has failed (and, by the way, is increasingly a target for subversion) -- in particular, that those participating in threat-intelligence programs have suffered from too much information, and that they struggle to deal with information that is neither actionable nor relevant -- I believe the requirement to rely on trusted parties is likely doomed to failure. "Trust" networks, if ad-sharing networks are any indicator, are an open invitation to new attack vectors.

The biggest problem that enterprise threat-intelligence customers are facing can be illustrated by the predicament any of us would encounter if we were placed in an office surrounded by televisions, each blaring away a separate TV news channel, and were expected to absorb and digest the day's happenings. Too much information is overwhelming. Adding additional TVs and news broadcasts only adds to the problem.

But another analogy can be drawn from the same TV news illustration. You'd think things would become simpler when there's a late-breaking story that most of the channels start covering at the same time. The simultaneous coverage is likely an indicator that something significant is happening and should be responded to.

Two significant wrinkles with this approach spring to mind. If the majority of the TV channels are covering the same national story, then what stories are not being covered? While they're all repeating the same news -- confirming among themselves the significance of the story -- other local stories are being dropped from the day's coverage. And then, as with practically any late-breaking story of significance, the TV channels -- each searching for new "facts" and unique commentary -- often end up repeating each other's facts (sometimes providing attribution to a competitor if they can't confirm it for themselves).

In the threat-intelligence community, what you end up with is a myopic fixation on the high-profile threat of the day (e.g., the latest APT that has made it to the news) to the detriment of other analysis and, I'm sorry to say, a framework that can be easily tainted by bad or mistaken information. There's so much pressure on the various threat-intelligence providers to offer like-for-like coverage of competitor feeds that each vendor subscribes to or monitors the others and will often add any missing intelligence data to its own feed, even if it can't confirm that data for itself. This already happens daily among the dozens of blacklists and antivirus signature vendors.
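One way to spot this cross-copying in practice is to measure how much two vendors' indicator lists overlap: independently discovered intelligence should diverge, while echo-chamber feeds converge. Below is a minimal sketch (the feed contents and domain names are hypothetical, invented for illustration) that computes the Jaccard similarity between two indicator sets:

```python
def jaccard(a, b):
    """Jaccard similarity between two indicator sets.

    A persistently high value across supposedly independent vendor
    feeds suggests cross-subscription and copying rather than
    independent discovery.
    """
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Hypothetical indicator feeds from two vendors.
feed_a = {"evil.example", "bot-c2.example", "phish.example"}
feed_b = {"evil.example", "bot-c2.example", "dropper.example"}

print(jaccard(feed_a, feed_b))  # 2 shared of 4 total -> 0.5
```

Tracking this metric over time, per vendor pair, is one crude way a feed consumer could discount "corroboration" that is really just duplication.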

The problems facing streaming threat intelligence feeds, their vendors, and their consumers are many and (unfortunately) endemic throughout the current intelligence-sharing model. Luckily, a new generation of machine-learning and clustering systems is making great headway in consuming the threat intelligence feeds from a bloating industry -- weeding out superfluous and inaccurate information -- and pre-emptively classifying threat categories, such as botnets and related domain abuse, but it is still years away from forming the basis of prioritizing actions against the full breadth of today's threat spectrum within the enterprise.
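The simplest form of that weeding-out step can be sketched in a few lines: merge the indicators from every subscribed feed, deduplicate, and keep only those corroborated by some minimum number of sources. This is a minimal illustration with hypothetical feeds and thresholds, not any vendor's actual pipeline, and (per the point above) corroboration counts are only meaningful when the feeds are genuinely independent:

```python
from collections import Counter

def consolidate(feeds, min_sources=2):
    """Merge indicator lists from multiple feeds, keeping only those
    reported by at least `min_sources` feeds.

    Caveat: if vendors copy one another, a high source count reflects
    echo rather than independent confirmation.
    """
    counts = Counter()
    for feed in feeds:
        for indicator in set(feed):  # dedupe within a single feed
            counts[indicator] += 1
    return {ioc: n for ioc, n in counts.items() if n >= min_sources}

# Hypothetical indicator feeds from three vendors.
feeds = [
    ["evil.example", "bot-c2.example", "phish.example"],
    ["bot-c2.example", "evil.example"],
    ["bot-c2.example", "unrelated.example"],
]
print(consolidate(feeds))
```

Real consolidation systems add far more signal (indicator age, sighting context, source reputation), but even this toy version shows why prioritization, not volume, is the hard part.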

The incestuous nature of the streaming intelligence service industry causes many problems, but it also creates new opportunities. While those responsible for safeguarding their corporate networks are overwhelmed with unactionable information from an avalanche of intelligence data, there is ample opportunity for boutique service providers to step in and provide distilled threat intelligence advice specific to their clients' needs.

As kids, we've probably all dreamed about having a humongous bowl filled with every flavor of ice cream imaginable and consuming the whole thing until we exploded. As an adult, I've learned that the strategy of first asking the person on the other side of the counter which flavors are the best in the store is often a more efficient and less explosive way to enjoyment.

Gunter Ollmann, CTO, IOActive Inc.


About the Author(s)

Gunter Ollmann

CTO, Security, Microsoft Cloud and AI Division

Gunter Ollmann serves as CTO for security and helps drive the cross-pillar strategy for the cloud and AI security groups at Microsoft. He has over three decades of information security experience in an array of cyber security consulting and research roles. Before joining Microsoft, Gunter served as chief security officer at Vectra AI, driving new research and innovation into machine learning and AI-based threat detection of insider threats. Prior to Vectra AI, he served as CTO of domain services at NCC Group, where he drove the company's generic Top Level Domain (gTLD) program. He was also CTO at security consulting firm IOActive, CTO and vice president of research at Damballa, and chief security strategist at IBM, and he built and led well-known and respected security research groups around the world, such as X-Force. Gunter is a widely respected authority on security issues and technologies and has researched, written, and published hundreds of technical papers and bylined articles.

Originally, Gunter had wanted to be an architect but he lost interest after designing retaining walls during a three-month internship. After that, he qualified as a meteorologist, but was lured to the dark side of forecasting Internet threats and cyberattacks. His ability to see dead people stoked an interest in history and first-millennium archaeology.

