A tremendous amount of energy is being spent on the harvesting, curation, distribution, and sharing of threat indicators and associated intelligence in the enterprise space.
The emergence of sharing groups and platforms, standards such as STIX/TAXII, reports of threat activity discovered through shared intelligence, and multiple government mandates related to threat information sharing all point to the rapid maturity cycle this space is experiencing. And while the process of delivering intelligence to enterprises requires continued focus to ensure incremental benefit for each new participant in the network, it is the systematic application of such intelligence that actually produces the security outcomes we're all looking for. In this post, I'm going to explore the temporal nature of applying indicators.
To put some definitions in place, I refer to the application of indicators (IP addresses, URLs, domains, MD5 hashes) to future activity as the prospective application of threat indicators. Correspondingly, applying indicators to historical data held in log management and SIEM platforms is the retrospective application of threat indicators. Both techniques have value, but often in strikingly different ways, and that distinction is worth examining.
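To make the definitions concrete, here is a minimal sketch of how an indicator feed might be represented in code. The `Indicator` record and the sample values are hypothetical; real feeds (STIX bundles, for instance) carry far more context, such as confidence scores and validity windows.

```python
from dataclasses import dataclass

# Hypothetical minimal indicator record; real feeds carry much more context.
@dataclass(frozen=True)  # frozen makes instances hashable, so they can live in a set
class Indicator:
    type: str   # "ipv4", "url", "domain", or "md5"
    value: str  # the observable itself, normalized to lowercase

# A tiny example feed with one indicator of each of three types.
feed = {
    Indicator("ipv4", "203.0.113.7"),
    Indicator("domain", "evil-c2.example.com"),
    Indicator("md5", "9e107d9d372bb6826bd81d3542a419d6"),
}
```

The same records can be matched against live traffic (prospectively) or replayed against stored logs (retrospectively), which is exactly the distinction explored below.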
Prospective application typically happens in or near real time: a data loss prevention (DLP) solution looking for IP theft in embedded content, for example, or a sensor watching for a specific user-agent string or signing certificate in SSL sessions. A match can drive a rapid response on the part of the enterprise, either automated through the security product or via the incident-response process.
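In code, prospective application amounts to checking each event against the indicator set as the event arrives. The sketch below assumes a hypothetical event shape (`dst_ip`, `domain`, `file_md5` fields) and an in-memory feed; a production system would stream events from a sensor or message bus.

```python
# Hypothetical indicator feed, held as a set for O(1) lookups.
INDICATORS = {
    "203.0.113.7",                        # command-and-control IP address
    "evil-c2.example.com",                # malicious domain
    "9e107d9d372bb6826bd81d3542a419d6",   # malware MD5 hash
}

def check_event(event: dict) -> list:
    """Return the observables in a live event that match known indicators."""
    observables = (event.get("dst_ip"), event.get("domain"), event.get("file_md5"))
    return [o for o in observables if o and o.lower() in INDICATORS]

# A non-empty result can trigger an automated block or open an incident ticket.
hits = check_event({"dst_ip": "203.0.113.7", "domain": "news.example.org"})
```

The key property is that matching occurs at the moment of the event, so response can be immediate; the trade-off, discussed next, is that the indicator must arrive before the adversary does.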
But for all the virtues of the prospective process, by the time your sharing platform delivers indicators based on observations made elsewhere, it's likely that the specific malware or command-and-control infrastructure has already been used against you. Therefore, there's limited value in continuing to scan only for ephemeral indicators such as file hashes or IP addresses. This is the conundrum David Bianco describes in his "Pyramid of Pain" model.
However, there is considerable value in being able to look backward through the retrospective application of indicators. Stored historical data typically isn't as rich as live telemetry, and trade-offs have to be made about the nature and duration of what gets stored. For example, your options on the network range from full-packet capture down to simple firewall logs, and retention can run from hours to eternity. Modern security operations centers that "assume breach" are always interested in learning about recent encounters with the adversary, so the fact that a specific hash was observed in an email to a key executive a week ago is a clear signal that a campaign has begun or resumed.
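Retrospective application is the same matching logic pointed at stored data: when a new indicator arrives, sweep the historical record and report every prior encounter. The log format and field names below are hypothetical stand-ins for whatever your log management platform or SIEM actually exposes.

```python
from datetime import datetime

def retro_search(logs: list, indicator: str) -> list:
    """Return the sorted timestamps of every stored record containing the indicator."""
    return sorted(
        rec["ts"]
        for rec in logs
        if indicator in (rec.get("md5"), rec.get("dst_ip"), rec.get("domain"))
    )

# Hypothetical stored mail-gateway log with a week of history.
mail_log = [
    {"ts": datetime(2015, 6, 1, 9, 30), "md5": "9e107d9d372bb6826bd81d3542a419d6"},
    {"ts": datetime(2015, 6, 8, 14, 5), "dst_ip": "198.51.100.9"},
]

# A newly shared hash matches a message from a week ago: a clear signal that
# a campaign has begun or resumed, even though the prospective window was missed.
encounters = retro_search(mail_log, "9e107d9d372bb6826bd81d3542a419d6")
```

Note that the sweep is only as good as the retention decisions made earlier: an indicator can't match a packet capture that aged out yesterday.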
As you venture into the world of threat intelligence and indicator sharing, consider where each stage can be optimized. This is true across the spectrum, whether you happen to be a producer, distributor, or consumer of threat intelligence, or the provider of the technology that operationalizes the data. Enterprises should be evaluating their providers with these objectives in mind -- for example, by demanding the ability to apply rich indicators to historical events.
Better outcomes will be achieved when temporal considerations are applied to the threat indicators we distribute and operationalize.