The U.S. Department of Homeland Security deployed its Automated Indicator Sharing (AIS) system in March to enable the exchange of cyber threat intelligence among private and public organizations. Its motivation is clear: to increase the breadth and speed of information sharing in order to help all types of organizations act more quickly and better defend themselves against emerging threats.
The concept of sharing information to fight common adversaries is nothing new. It’s similar to the Griffin Book, the blacklist casinos use to identify known cheaters. The casinos share information on shady characters with the gaming board or Griffin Investigations, which then disseminates that information to all casinos so they can identify and ban cheaters. It's a great idea – share the intelligence and everybody (but the cheater) wins. And in the case of casinos fighting criminals in physical locations, it makes all the sense in the world.
There is almost unanimous agreement among security professionals that cyber threat information is similarly valuable to their organizations. However, digging deeper into attitudes toward sharing that information, and the barriers to actually doing it, unveils myths and significant reticence that make it a lot less simple than it might sound.
The known but evolving threat
In theory, the idea of sharing threat intelligence to fight cybercrime makes a lot of sense. But the problem in cybersecurity is that you're not dealing with known individuals, you’re dealing with anonymous adversaries that are capable of rapid change.
In casinos, if Dom “Roll Star” Spinale decided to get a new name and bag the aviator sunglasses, chances are he'd still be recognizable – and most likely sitting at a craps table. When you bring the same approach to the cybersecurity world, you don't have a photo, a name, a defined M.O. or any other clues as to who is on the other end. Instead, you have the type of attack (malware, phishing, ransomware), an IP range and maybe an email address. So what you’re sharing are file hashes, IPs to block, and maybe known email addresses. All really important, but clearly not as easy to pinpoint as a specific person with a set of defined habits or physical features that are impossible to mask from the facial recognition technology many casinos use today.
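To make concrete how thin these indicators are, here is a minimal sketch of matching local observations against a shared feed of hashes, IP ranges, and sender addresses. All indicator values are hypothetical (the IP range is a documentation-reserved block), and the feed structure is invented for illustration:

```python
import ipaddress

# Hypothetical shared feed: file hashes, CIDR ranges to block, sender emails.
FEED = {
    "sha256": {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},
    "cidrs": [ipaddress.ip_network("203.0.113.0/24")],  # TEST-NET-3, for illustration
    "emails": {"billing@example.net"},
}

def matches_feed(sha256: str, src_ip: str, sender: str) -> bool:
    """Return True if any observed attribute matches a shared indicator."""
    if sha256.lower() in FEED["sha256"]:
        return True
    ip = ipaddress.ip_address(src_ip)
    if any(ip in net for net in FEED["cidrs"]):
        return True
    return sender.lower() in FEED["emails"]

print(matches_feed("ABCD" * 16, "203.0.113.7", "alice@example.org"))  # True (IP match)
```

The match tells you an observation overlaps with someone else's bad experience, and nothing more – a hash or an IP can be swapped out by the adversary far faster than a face can.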
Additionally, unlike in casinos, cyber threats are capable of multiplying and morphing at a breakneck pace. As everything shifts to digital, there’s a seemingly endless supply of new threats and possible attack vectors. Monitoring for and blocking known threats is something that every organization should be doing, but it’s not as effective as it once was. By the time a threat signature is identified and shared, there’s ample opportunity for it to be masked again. And if the bad guys are getting the same threat feed as everyone else – which happens – they have real-time visibility into whether they're getting caught and can quickly change their tactics. We’re giving them real-time feedback on what’s working and what’s not so they can adjust on the fly.
Without a doubt, threat intelligence sharing takes on a new level of complexity when we start talking about malicious systems instead of people. While many organizations want to benefit from collective threat intelligence, they are reticent about sharing back. This makes perfect sense when you consider the potential consequences of sharing attack information that could be connected back to a company. Were that to happen, you'd be broadcasting to the world that you've been attacked – and potentially that your defenses failed.
Add to that the fact that sharing threat intelligence could also be informing the bad guys’ next attacks, and it can be hard for organizations to see the upside.
Where do we go from here?
As cyber threats grow and evolve, organizations have been focused on the “what.” We dwell on what information we’d be willing to share, knowing that adversaries have the power to use that information against us.
Perhaps what we should be doing is focusing more on the “how.” How do we share information in a way that empowers organizations to collectively fight back?
While I don’t have all the answers, here are a few thoughts for consideration:
What if threat feeds were only machine-to-machine accessible? For instance, if the threat intelligence was shared in a machine-readable format to a SIEM, then only the companies that have detection systems could use that information. It's unlikely that a scammer is going to buy an expensive system to check their work. The question then becomes about how those systems report back to the people in charge of them.
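As a sketch of what machine-to-machine sharing could look like, assume indicators arrive as simplified, STIX-flavored JSON objects (the field names below are invented for illustration, not the actual STIX 2.1 schema) and are loaded directly into a detection system's blocklist, with no human-readable report in the loop:

```python
import json

# Hypothetical machine-readable feed payload (simplified, STIX-flavored).
FEED_JSON = """
[
  {"type": "indicator", "pattern_type": "ipv4",   "value": "192.0.2.10"},
  {"type": "indicator", "pattern_type": "sha256",
   "value": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"}
]
"""

def load_blocklist(raw: str) -> dict:
    """Parse the feed and group indicator values by pattern type."""
    blocklist: dict = {}
    for obj in json.loads(raw):
        if obj.get("type") != "indicator":
            continue  # ignore any non-indicator objects in the feed
        blocklist.setdefault(obj["pattern_type"], set()).add(obj["value"])
    return blocklist

blocklist = load_blocklist(FEED_JSON)
print("192.0.2.10" in blocklist["ipv4"])  # True
```

The point of the design is that the feed is only actionable to something that can parse it and act on it automatically – which is exactly why the reporting path from that system back to humans becomes the open question.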
What if we could completely de-couple the organization from the threat, and there was no way to associate the two? There is a fine line between how much information is needed to be useful, and how much puts an organization’s concealed identity at risk. Still, cracking this code would allay many fears about providing intelligence back to the mother ship.
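One crude way to approach that de-coupling, sketched below under purely hypothetical field names: strip every organization-identifying field before submission and replace the submitter with a salted one-way token, so a clearinghouse could de-duplicate repeat reporters without being able to name them. This is a thought experiment, not a full anonymity scheme – real de-identification is much harder than dropping a few fields:

```python
import hashlib
import secrets

# Salt held only by the submitting organization (never shared).
ORG_SALT = secrets.token_bytes(16)

def anonymize_report(report: dict, org_name: str) -> dict:
    """Strip identifying fields and replace the submitter with a one-way token."""
    identifying = {"org_name", "contact_email", "internal_hostnames"}
    cleaned = {k: v for k, v in report.items() if k not in identifying}
    # Same org + same salt -> same token, so repeat reports correlate,
    # but the token cannot be reversed to the organization's name.
    cleaned["submitter_token"] = hashlib.sha256(ORG_SALT + org_name.encode()).hexdigest()
    return cleaned

report = {
    "org_name": "ExampleCorp",        # hypothetical submitter
    "contact_email": "soc@example.com",
    "indicator": "203.0.113.99",
    "attack_type": "phishing",
}
print(anonymize_report(report, "ExampleCorp"))
```

Even in this toy version, the tension the paragraph describes is visible: every field you keep makes the intelligence more useful, and every field you keep is another clue to who submitted it.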
What if opt-in isn’t the right way to go about information sharing? When companies need to opt in to share data, overabundance of caution may get the best of them. On the other hand, if they’re already sharing information (albeit anonymously), would that change their mindset?
Each of these questions is a deeper discussion in itself. Please share your thoughts in the comments.