There is much to be learned from the striking parallels between counter-terrorism threat analysis before 9-11 and how we handle cyber threat intelligence today.

Paul Kurtz, Chief Cybersecurity Adviser, Splunk Public Sector

April 26, 2017


When it comes to threats in cyberspace, is it fair to say “what’s past is prologue”?

Maybe.

Former CIA Director George Tenet's warning, less than two months before 9-11, that "the system was blinking red" is eerily reminiscent of our current threat environment in cyberspace. We have a preponderance of reporting on adversaries, but specific, actionable detail remains scarce.

This is not a prediction of a "cyber 9-11" but rather an identification of the striking parallels between how we approached counter-terrorism threat analysis before 9-11 and how we handle cyber threat intelligence today. Our approach to cyber threat intelligence is broken.

Before Everything Changed
As a member of the counterterrorism team at the White House for the two years leading up to 9-11, I had more than a sinking suspicion that the most important intelligence about al-Qaeda's attack plans was kept inside the walls of our own intelligence agencies. During daily video conferences with the FBI, NSA, and CIA, I was told certain reporting details could not be shared with all of the participants because of source sensitivity, legal constraints, or bureaucratic turf wars. It was disturbing, and ultimately disastrous, as we know what happened. Critical data, including information on the hijackers' pilot training classes, remained unavailable to other agencies.

On the counterterrorism team we had extensive access to terrorism reporting, but as documented in the 9-11 Commission's report, the team did not have access to "internal, non-disseminated information at the NSA, CIA, or FBI." While agencies were charged with working together, in reality each worked independently to gather and assess threat data while withholding certain details from the others, failing to understand the dangers of non-disclosure.

The challenge we faced then, and face now, is how to gain access to what is really happening inside company networks.

What’s the Same?

  • The most important data remains inside organizations. Before 9-11, we understood al-Qaeda was a threat, but we did not have access to specific details that, if fused, could have shed light on the plot underway to launch the attacks. Today we know that Russia, Iran, North Korea, and China, as well as criminal organizations, represent a serious threat, yet the specific tactics, techniques, and procedures (TTPs) they use to gain access to systems remain closely held. For example, consider Grizzly Steppe, the U.S. government's name for the email hacks against the DNC during the 2016 election that were attributed to Russia. The government's first release of Grizzly Steppe information, on December 29, was not useful because it lacked context. After the security community voiced concern, the government released additional information providing more context. Individual organizations are aware of TTPs, but are unwilling to release data in a timely way because doing so is seen as too risky from a market perspective. (The sketch following this list illustrates the difference that context makes to a defender.)

  • The system is blinking red. There was a drumbeat of intelligence in the summer before 9-11, with reporting presented to top officials on Bin Ladin planning attacks in the U.S., India, Israel, Italy, and the Gulf. Analysts could barely keep pace with the reporting. Today, similarly, data on cybersecurity threats is continually growing, as is the frequency and severity of attacks. The "blinking red" analogy aptly describes the situation at Target prior to the 2013 breach, and at several organizations since: an endless offering of threat data, much of which is not timely, actionable, or relevant.

  • No common situational awareness. Our current picture of cyberspace is strikingly similar to the pre-9-11 environment. Just as each intelligence agency had its own view prior to 9-11, each company today has its own view of cyberspace. Self-selecting into sector-specific sharing is necessary but not sufficient when we know that adversaries use the same tools and infrastructure to strike multiple sectors.
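To make the point about context concrete, here is a minimal sketch in Python of the difference between the kind of bare indicator list defenders received in the first Grizzly Steppe release and an indicator record carrying enough context to act on. The record structure, field names, and values are hypothetical illustrations, not any particular product's or standard's schema.

```python
from dataclasses import dataclass, field, asdict
from typing import List
import json

# A bare indicator dump: roughly what the December 29 release looked like to
# defenders -- values with no context. (Documentation IPs, not real ones.)
BARE_FEED = ["203.0.113.10", "198.51.100.7"]

@dataclass
class EnrichedIndicator:
    """Hypothetical record: an indicator plus the context an analyst needs
    to decide whether, and how, it is actionable."""
    value: str
    indicator_type: str            # e.g. "ipv4-addr"
    campaign: str                  # which reporting it belongs to
    observed_ttp: str              # what the adversary did with it
    first_seen: str                # ISO 8601 timestamp
    confidence: str                # reporter's confidence, e.g. "medium"
    sources: List[str] = field(default_factory=list)

def enrich(ip: str) -> EnrichedIndicator:
    """Attach context to a bare IP. In practice this detail would come from
    the reporting organization's own incident records."""
    return EnrichedIndicator(
        value=ip,
        indicator_type="ipv4-addr",
        campaign="GRIZZLY STEPPE",
        observed_ttp="credential-phishing landing page",   # illustrative only
        first_seen="2016-10-01T00:00:00Z",                 # illustrative only
        confidence="medium",
        sources=["internal incident 2016-1042"],           # hypothetical ID
    )

if __name__ == "__main__":
    # Without context, an operator can only blindly block or ignore the feed;
    # with it, the same value can be prioritized, scoped, and aged out.
    print(json.dumps([asdict(enrich(ip)) for ip in BARE_FEED], indent=2))
```

The specific fields matter less than the principle: without campaign, TTP, and timing context, a security operator cannot tell whether an indicator is timely, relevant, or worth the cost of blocking.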

What’s Different?

  • Government can’t help. In the case of counterterrorism, government has the mandate, authority, and resources to track and address the threat. This is not the case in cyberspace. Government’s ability to act is limited. Government agencies are unaware of the attacks occurring on a daily basis inside companies. Companies assume that the U.S. government can provide “tip off,” when, in fact, the private sector may possess the most useful data and either not know it, or be unable to share it or access it effectively.

  • Adversaries are more plentiful. There are numerous terrorist organizations in existence today, and unfortunately, cyber adversaries are even more plentiful. They range from terrorists themselves to hacktivists, criminals, and nation states. Their motives vary, and they can easily mask their identities, obfuscate attribution, or piggyback on the work of others. We learned from the recent WikiLeaks Vault 7 dump that the CIA's alleged "Marble Framework" includes obfuscation technology that can make it appear an attack came from elsewhere.

  • Doing more damage with less. Adversaries have an asymmetric advantage: they leverage computers to do their work for them from afar and need only find one way in to inflict significant damage, from theft of data to its destruction. They use software to increase their speed, reach, and returns, and they share attack infrastructure as well.

Change is Necessary NOW
Avoiding large-scale disasters in cyberspace requires a shift in thinking. While individual companies are responsible for securing themselves, it is no longer possible for any one company to "go it alone" and defend itself without real-time insight into the attacks happening against others.

The current generation of threat intelligence platforms (TIPs) and tools can help an organization aggregate external threat data from thousands of open source feeds and proprietary intelligence providers. But this siloed approach creates a noisy, false sense of security and does little to enable or incentivize actual intelligence exchange and collaboration across teams, tools, and companies. These platforms lack the technology needed to scale real-time exchange between companies in a way that accounts for market risk and identifies what has immediate value to security operators.

While the government is hamstrung by bureaucracy and regulations, the private sector has the imperative to determine its own destiny when it comes to threat intelligence sharing. This isn’t a pipe dream; we’re seeing organizations like the Cloud Security Alliance and OASIS take steps towards this new era of intelligence exchange today.
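OASIS's work in this area includes the STIX and TAXII standards for structured, machine-readable threat intelligence exchange. Below is a minimal sketch using the open-source python-stix2 library, assuming stix2 v3 (which implements STIX 2.1); every value in it is illustrative rather than real intelligence.

```python
# Minimal sketch with the open-source python-stix2 library
# (pip install stix2; assumes stix2 v3, which implements STIX 2.1).
# All values below are illustrative, not real intelligence.
from stix2 import Bundle, Indicator, Relationship, ThreatActor

indicator = Indicator(
    name="Phishing infrastructure IP (example)",
    description="Hypothetical indicator shared with enough context to act on.",
    pattern="[ipv4-addr:value = '203.0.113.10']",  # documentation address
    pattern_type="stix",
    valid_from="2017-04-01T00:00:00Z",
)

actor = ThreatActor(
    name="Example Intrusion Set",          # placeholder, not a real attribution
    threat_actor_types=["unknown"],
    description="Illustrative threat-actor record.",
)

# The relationship object is what turns a bare value into shareable context:
# this indicator is asserted to indicate activity by that actor.
link = Relationship(indicator, "indicates", actor)

# A bundle is the unit an organization would exchange with others,
# for example over a TAXII feed.
print(Bundle(indicator, actor, link).serialize(pretty=True))
```

Exchanging indicators, actors, and the relationships between them as a bundle, rather than as bare lists of values, is one way to preserve the context argued for above.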

We must continue to lay the groundwork for a secure exchange network across the private sector so that we can avoid future large-scale hacks.


About the Author(s)

Paul Kurtz

Chief Cybersecurity Adviser, Splunk Public Sector

Paul Kurtz is an internationally recognized expert on cybersecurity, a co-founder of TruSTAR, and now Chief Cybersecurity Adviser for Splunk's Public Sector business. Paul began working on cybersecurity at the White House in the late 1990s, where he served in senior positions relating to critical infrastructure and counterterrorism on the White House's National Security and Homeland Security Councils.

