The Gamble Behind Cyber Threat Intelligence Sharing

Commentary | Nathan Burke | 6/18/2016 10:00 AM

In theory, sharing threat intel makes sense. But in cybersecurity you're not dealing with known individuals, you're dealing with anonymous adversaries capable of rapid change.

The U.S. Department of Homeland Security deployed its Automated Indicator Sharing (AIS) system in March to enable the exchange of cyber threat intelligence among private and public organizations. The motivation is clear: increase the breadth and speed of information sharing so that organizations of all types can act more quickly and better defend themselves against emerging threats.

The concept of sharing information to fight common adversaries is nothing new. It's similar to the Griffin Book, the blacklist casinos use to identify known cheaters. Casinos report shady characters to the gaming board or to Griffin Investigations, which disseminates that information to all casinos so the cheaters can be identified and banned. It's a great idea: share the intelligence and everybody (but the cheater) wins. And in the case of casinos fighting criminals in physical locations, it makes all the sense in the world.

There is almost unanimous agreement among security professionals that cyber threat information is similarly valuable to their organizations. However, digging deeper into attitudes toward sharing that information, and into the barriers to actually sharing it, reveals myths and significant reticence that make the practice far less simple than it might sound.

The known but evolving threat

In theory, the idea of sharing threat intelligence to fight cybercrime makes a lot of sense. But the problem in cybersecurity is that you're not dealing with known individuals; you're dealing with anonymous adversaries capable of rapid change.

In casinos, if Dom “Roll Star” Spinale decided to get a new name and ditch the aviator sunglasses, chances are he'd still be recognizable, and most likely sitting at a craps table. In the cybersecurity world, you don't have a photo, a name, a defined M.O., or any other clue about who is on the other end. Instead, you have the type of attack (malware, phishing, ransomware), an IP range, and maybe an email address. So what you're sharing are file hashes, IPs to block, and perhaps known email addresses. All of that is important, but none of it pinpoints an adversary the way defined habits or physical features pinpoint a specific person, features that are impossible to mask from the facial recognition technology many casinos use today.
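
For illustration, here is roughly what one of those shared indicators might look like as structured data. This is a simplified sketch loosely inspired by STIX-style indicators; the field names and values below are placeholders for the example, not the exact AIS schema.

```python
# Illustrative only: a simplified, STIX-inspired indicator record.
# Field names are assumptions for this sketch, not an exact AIS/STIX schema.
shared_indicator = {
    "type": "indicator",
    "labels": ["malicious-activity"],
    "description": "Phishing campaign dropping credential-stealing malware",
    "observables": {
        "file_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
        "ipv4_block": "203.0.113.0/24",         # documentation range, placeholder
        "sender_email": "invoice@example.com",   # placeholder address
    },
    "valid_from": "2016-06-18T10:00:00Z",
}
```

Notice what is absent: there is no attacker identity here at all, only artifacts the attacker can swap out at will.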

Additionally, unlike in casinos, cyber threats are capable of multiplying and morphing at a breakneck pace. As everything shifts to digital, there's a seemingly endless supply of new threats and possible attack vectors. Monitoring for and blocking known threats is something every organization should be doing, but it's not as effective as it once was. By the time a threat signature is identified and shared, there's ample opportunity for it to be masked again. And if the bad guys are getting the same threat feed as everyone else (which happens), they have real-time visibility into whether they're getting caught and can quickly change their tactics. We're giving them real-time feedback on what's working and what's not, so they can adjust on the fly.
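
To make the "masked again" point concrete, here is a small sketch of why hash-based indicators age so quickly: changing a single byte of a payload produces a completely different SHA-256, so a shared hash no longer matches the modified sample. The payload bytes below are stand-ins, not real malware.

```python
import hashlib

# A stand-in for a malicious payload; any byte string works for the demonstration.
original = b"MZ\x90\x00" + b"...payload bytes..."
modified = original + b"\x00"  # attacker appends a single padding byte

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(modified).hexdigest())
# The two digests have nothing in common, so a feed entry keyed on the
# original hash will not flag the trivially modified variant.
```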

Lingering fears

Without a doubt, threat intelligence sharing takes on a new level of complexity when we start talking about malicious systems instead of people. While many organizations want to benefit from collective threat intelligence, they are reticent about sharing back. This makes perfect sense when you consider the potential consequences of sharing attack information that could be connected back to a company. Were that to happen, you'd be broadcasting to the world that:

  1. My company was targeted in an attack that made it through my detection systems.
  2. The vulnerability that precipitated a breach is an open door to everyone else.
  3. I may now be required to report this breach.

Add to that the fact that sharing threat intelligence could also be informing the bad guys’ next attacks, and it can be hard for organizations to see the upside.

Where do we go from here?

As cyber threats grow and evolve, organizations have focused on the “what.” We dwell on what information we'd be willing to share, knowing that adversaries have the power to use that information against us.

Perhaps what we should be doing is focusing more on the “how.” How do we share information in a way that empowers organizations to collectively fight back?

While I don’t have all the answers, here are a few thoughts for consideration:

What if threat feeds were only machine-to-machine accessible? For instance, if the threat intelligence were shared in a machine-readable format directly to a SIEM, then only companies that run detection systems could use that information. It's unlikely that a scammer is going to buy an expensive system just to check their work. The question then becomes how those systems report back to the people in charge of them.
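
As a thought experiment, a machine-to-machine feed might look something like the sketch below: a detection system pulls indicators over an authenticated API and loads them straight into its block list, with no human-readable export step. The endpoint, token, and block-list object here are hypothetical placeholders, not a real AIS or vendor API.

```python
import requests  # hypothetical integration sketch; not a real AIS or SIEM API

FEED_URL = "https://threat-feed.example.com/v1/indicators"  # placeholder endpoint
API_TOKEN = "machine-credential-issued-to-the-siem"         # placeholder credential

def pull_and_apply_indicators(siem_blocklist):
    """Fetch machine-readable indicators and apply them directly to a SIEM block list."""
    resp = requests.get(
        FEED_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    for indicator in resp.json().get("indicators", []):
        if indicator.get("type") == "ipv4":
            siem_blocklist.add(indicator["value"])
    # The feed is never rendered for humans; only systems holding a machine
    # credential can consume it, which is the point of the thought experiment.

# Usage, with a plain set standing in for the SIEM's block list:
# blocklist = set()
# pull_and_apply_indicators(blocklist)
```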

What if we could completely de-couple the organization from the threat, so there was no way to associate the two? There is a fine line between how much information is needed to be useful and how much puts an organization's concealed identity at risk. Still, cracking this code would allay many fears about providing intelligence back to the mother ship.
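
One way to picture that de-coupling is to strip or generalize anything that ties an indicator back to the submitter before it leaves the building. The sketch below is a hypothetical sanitizer, not a solved design; deciding which fields count as "identifying" is exactly the hard part described above.

```python
# Hypothetical sanitizer: keep only attacker-side observables and drop anything
# that could identify the submitting organization. Field names are illustrative.
ATTACKER_SIDE_FIELDS = {"file_sha256", "source_ip", "sender_email", "attack_type"}

def sanitize_for_sharing(raw_event: dict) -> dict:
    """Return a copy of the event containing only non-attributable indicator fields."""
    shared = {k: v for k, v in raw_event.items() if k in ATTACKER_SIDE_FIELDS}
    # Deliberately omitted: victim hostnames, internal IPs, usernames, org names,
    # and timestamps precise enough to correlate with public breach reports.
    return shared

event = {
    "file_sha256": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    "source_ip": "198.51.100.23",                      # documentation range, placeholder
    "attack_type": "ransomware",
    "victim_host": "finance-db-01.corp.example.com",   # would reveal the victim
    "reporting_org": "Acme Corp",                      # would reveal the submitter
}
print(sanitize_for_sharing(event))  # only the attacker-side fields survive
```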

What if opt-in isn't the right way to go about information sharing? When companies need to opt in to share data, an overabundance of caution may get the better of them. On the other hand, if they were already sharing information by default (albeit anonymously), would that change their mindset?

Each of these questions is a deeper discussion in itself. Please share your thoughts in the comments.

Nathan has written extensively about the intersection of collaboration and security, focusing on how businesses can keep information safe while accelerating the pace of sharing and collaborative action. For 10 years, Nathan has taken on marketing leadership roles in ...