
How To Share Threat Intelligence Through CISA: 10 Things To Know

If you want those liability protections the Cybersecurity Information Sharing Act promised, you must follow DHS's new guidelines.

Share information about breaches, attacks, and threats with the federal government without fear of legal repercussions -- that's the alluring promise of the Cybersecurity Information Sharing Act (CISA, passed as part of the Cybersecurity Act of 2015). However, those liability protections do not apply to any and all sharing. If you want to be safe from litigation, you must share information in accordance with the guidelines recently released by the US Department of Homeland Security.

Security and privacy professionals alike were anxiously awaiting these guidelines because they answer some of the questions about privacy protection that have persisted since CISA passed. They also provide instructions -- particularly for non-federal entities -- on precisely how to conduct information sharing activities under the new law.

Here's what you need to know.

1. You need to remove individuals' personal data before sharing it.

The guidelines require that, before sharing data, an organization remove any information "that it knows at the time of sharing" to be personally identifiable information of "a specific individual that is not directly related to a cybersecurity threat."

If you don't do that, you won't get liability protection.

The guidelines acknowledge that there may be occasions when PII is "directly related," such as in a social engineering attack. Even then, the relevant characteristics of those individuals (a job title, for example) can sometimes be shared, provided they are anonymized first.

"The DHS Guidance does a decent job of explaining what ['directly related'] means, but I believe there is still a lot left to subjective decision making by the company doing the sharing," says Jason Straight, chief privacy officer of UnitedLex, and speaker at the upcoming Interop Las Vegas conference. "If they make a 'bad call,' and share something they shouldn’t have, what happens? Do they not get liability protection? Who decides?"

Straight also points out that this requires that organizations put in place people, processes, and technology they might not have had before. 
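To make that scrubbing step concrete, here is a minimal sketch of what a pre-share scrub might look like. The record fields, the PII denylist, and the name-to-job-title anonymization are illustrative assumptions for demonstration, not anything specified by DHS:

```python
# Illustrative sketch only: field names and the PII denylist below are
# assumptions for demonstration, not a DHS-specified schema.

# Fields presumed to be PII "not directly related to a cybersecurity threat."
PII_FIELDS = {"employee_name", "employee_email", "home_address", "phone"}

def scrub_indicator(record):
    """Return a copy of `record` with known PII removed.

    PII that IS directly related to the threat (e.g., the person targeted
    in a social-engineering attack) is anonymized rather than shared:
    here we drop the target's name and keep only the job title.
    """
    clean = {k: v for k, v in record.items() if k not in PII_FIELDS}
    clean.pop("target_name", None)  # anonymize: keep target_job_title only
    return clean

indicator = {
    "type": "phishing",
    "sender_domain": "malicious.example",
    "target_name": "Jane Doe",              # PII -- removed
    "target_job_title": "Payroll Manager",  # relevant characteristic -- kept
    "employee_email": "jdoe@example.com",   # PII -- removed
}

shared = scrub_indicator(indicator)
print(sorted(shared))
```

In practice, this is exactly the people-process-technology investment Straight describes: someone has to decide, field by field, what is "directly related" before anything leaves the building.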


2. The personal data you need to remove may be more extensive than you think.

The guidelines provide a list of private data types that are protected by regulation, are unlikely to be directly related to a cybersecurity threat, and should therefore be on your watch list when scrubbing.

That list includes not just basic PII and personal health information, but also:

- human resources information (including performance reviews and the like)
- consumer information protected by the Fair Credit Reporting Act
- education history protected by the Family Educational Rights and Privacy Act
- financial information, including investment advice, protected by the Gramm-Leach-Bliley Act
- identifying information about property ownership (like vehicle identification numbers)
- identifying information about children under 13 protected by the Children's Online Privacy Protection Act


3. Be particularly careful of Europeans' personal data.

European privacy laws protecting personal data are much more rigorous than American ones, and the divide is only getting wider. As we've explained before:

The EU General Data Protection Regulation (GDPR), a replacement for the EU Data Protection Directive, is expected to be ratified by European Parliament this spring session, and go into effect by 2018. The GDPR will expand the definition of "personal data" to "encompass other factors that could be used to identify an individual, such as their genetic, mental, economic, cultural or social identity," according to IT Governance. ...

So, data on Europeans' shoe sizes, political affiliations, and more may be protected.

Proposed GDPR penalties include fines of up to 4% of annual global revenue, and many breaches of personal data must be reported within 72 hours of discovery. So it's no small issue when data is misused or lost.

Plus, the newly proposed trans-Atlantic data transfer agreement, EU-US Privacy Shield, if passed, will create a host of new regulations about how the US is permitted to handle data, and what European citizens' legal rights are in the event that Americans violate their rights. 

You're better off upping your data classification game and avoiding sharing European citizens' data through CISA at all.


4. If you want liability protection, share with DHS or ISACs and not other federal agencies.

Liability protection is only given when you share information with DHS's National Cybersecurity and Communications Integration Center (NCCIC) -- the official hub for sharing cyber threat indicators between the private sector and the federal government -- or with the industry ISACs (like FS-ISAC) that will pass the data on to DHS.

Again, this only happens if the data is scrubbed of personal information before you share it. 

CISA does allow you to share cyber threat indicators with other federal agencies, "as long as ... the sharing is conducted for a cybersecurity purpose," but you will not get the liability protections.


5. DHS scrubs it of personal information too, but...

DHS will review all submitted threat data and -- using automated and manual means -- remove any remaining personal information before sharing it with any other agencies.

So no submitted data will go to waste, but you won't get the liability protection.

Plus, there is a privacy issue, considering that one federal agency (DHS) has already seen information that it should not have. 

CISA does, however, require federal entities to notify, "in a timely manner, any United States person whose personal information is known or determined to have been shared in violation of CISA."

That notification is only required for US persons, according to CISA, but "as a matter of policy, DHS extends individual notification to United States and non-United States persons alike in accordance with its Privacy Incident Handling Guidelines."


6. Joining AIS and building a TAXII client makes all this easier.

All that data scrubbing might sound like a nightmare! Who would bother sharing anything at all? Luckily, DHS NCCIC has automated and standardized the process to make it less painful.

The Automated Indicator Sharing (AIS) initiative allows organizations to format and exchange threat indicators and defense measures in a standardized way, using standard technical specifications that were developed to satisfy CISA's private data scrubbing requirements. 

The Structured Threat Information eXpression (STIX) and Trusted Automated eXchange of Indicator Information (TAXII) are standards for data fields and communication, respectively. OASIS now manages both specs.
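As a rough illustration of the kind of structure STIX imposes, the sketch below serializes an indicator as a simple JSON document. The field names here are assumptions chosen for readability -- real STIX 1.x documents are schema-validated XML with many more required elements:

```python
import json

# Illustrative, STIX-flavored indicator -- NOT schema-valid STIX.
# Field names are assumptions for demonstration only.
indicator = {
    "id": "example:indicator-0001",
    "title": "Phishing domain observed in active campaign",
    "observable": {"type": "DomainName", "value": "malicious.example"},
    "indicated_ttp": "spear-phishing",
    "confidence": "Medium",
}

payload = json.dumps(indicator, indent=2)
print(payload)
```

The point of the standard is exactly this: every participant labels the same facts (observable type, value, confidence) the same way, so machines on both ends can parse them without human intervention.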

To share threat info, AIS participants acquire their own TAXII client, which communicates with the DHS TAXII server. As a DHS representative explained in a statement to Dark Reading:

"A TAXII client can be built by any organization that wishes to do so based on the TAXII specification. DHS has built an open-source TAXII client for any organization that would like to use it free of charge, or incorporate the code into their existing systems. In addition, there are a number of commercially available products that incorporate TAXII connectivity. A list can be found at"

To date, four federal agencies and 50 non-federal entities have signed up for AIS.
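To give a sense of what "acquiring a TAXII client" entails, here is a sketch that builds a TAXII 1.1 Poll_Request message. The namespace comes from the TAXII 1.1 XML binding; the collection name is a placeholder, and a real client (such as DHS's open-source one) would also handle certificate-based authentication, the TAXII-specific HTTP headers, and response parsing:

```python
import xml.etree.ElementTree as ET

# TAXII 1.1 XML message binding namespace (per the TAXII 1.1 spec).
TAXII_11_NS = "http://taxii.mitre.org/messages/taxii_xml_binding-1.1"

def build_poll_request(collection_name, message_id="1"):
    """Build a minimal TAXII 1.1 Poll_Request as an XML string.

    Sketch only: real requests may also carry begin/end timestamps,
    subscription IDs, and delivery parameters.
    """
    root = ET.Element(
        "{%s}Poll_Request" % TAXII_11_NS,
        {"message_id": message_id, "collection_name": collection_name},
    )
    # Poll parameters: no subscription, synchronous response.
    ET.SubElement(
        root, "{%s}Poll_Parameters" % TAXII_11_NS, {"allow_asynch": "false"}
    )
    return ET.tostring(root, encoding="unicode")

# Placeholder collection name -- the real AIS collection name comes from DHS.
xml_message = build_poll_request("AIS-example-collection")
print(xml_message)
```

Sending the message would then be an HTTPS POST of this XML to the DHS TAXII server's poll endpoint, after which indicators flow back in STIX format.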


7. There are other ways to share indicators, too.

Threat info can also be shared with DHS through channels other than AIS.


8. There are rules government agencies must follow, and punishments if they don't.

The federal agencies that receive data shared through CISA must follow certain operational procedures that govern authorized access and ensure timely dissemination of threat data. From the interim procedures document:

Failure by an individual to abide by the usage requirements set forth in these guidelines will result in sanctions applied to that individual in accordance with their department or agency’s relevant policy on Inappropriate Use of Government Computers and Systems. Penalties commonly found in such policies, depending on the severity of misuse, include: remedial training; loss of access to information; loss of a security clearance; and termination of employment.


9. There are still privacy concerns.

Although the list of privacy-related laws mentioned in section 2 above might seem pretty extensive, Jadzia Butler of the Center for Democracy and Technology pointed out:

...the list does not include the Electronic Communications Privacy Act (ECPA) or the Wiretap Act – the two laws most likely to be “otherwise applicable” to information sharing authorized by the legislation because they prohibit (with exceptions) the intentional disclosure of electronic communications.

Another question: What will agencies do with all that data once they have it? Will it be used only for cybersecurity purposes?

The CISA and DHS Privacy and Civil Liberties Interim Guidelines state specifically how the federal government can use the information, and other uses are expressly prohibited. But some privacy experts say the language is not prohibitive enough, and the official privacy impact assessment itself concedes that "Users of AIS may use AIS cyber threat indicators and defensive measures for purposes other than the uses authorized under CISA."

As written, the uses permitted by CISA extend beyond direct cybersecurity attacks. Agencies may also use the submitted information for:

- responding to, preventing, or mitigating a specific threat of death, serious bodily harm, serious economic harm, a terrorist act, or the use of a weapon of mass destruction;
- responding to, investigating, prosecuting, or preventing a serious threat to a minor, including sexual exploitation or physical threats to safety;
- preventing, investigating, disrupting, or prosecuting espionage, censorship, fraud, identity theft, or IP theft.

As Butler wrote:

For example, even under these guidelines, information shared with the federal government for cybersecurity reasons could be stockpiled and mined for use in unrelated investigations of espionage, trade secret violations, and identity theft.

Without additional limitations, the information sharing program could end up being used as a tool by law enforcement to obtain vast swaths of sensitive information that could otherwise be obtained only with a warrant or other court order. In other words, privacy advocates’ warnings that CISA is really a surveillance bill dressed in cybersecurity clothing may still come to fruition.


10. The liability protections themselves aren't entirely clear.

"The liability protection is fairly broad but not clear that it includes protection from disclosure through litigation process (discovery requests) or subpoenas," says Straight. "The big risk there, in my view, is that it would be potentially possible to use the fact that a breached company shared threat intel under CISA as evidence of when a company was aware of a threat or incident. This could become part of a broader claim by a plaintiff that the breached company did not do enough to mitigate or respond effectively to the incident."

Sharing threat data isn't the only thing that may come with risks; simply receiving threat feeds via AIS could have legal risks, according to Straight.

"An organization that receives threat feeds should be prepared to take on the burden of assessing the threats and responding appropriately," he says. "This will create a burden on the receiving organization that did not exist before. Also, I believe there is some risk in receiving threat data that you are not equipped to act upon. Again, it is conceivable that the fact that you received notice of a threat through threat sharing, did nothing, and were then compromised by that threat could be used against you in a litigation or even a regulatory action."

So, sharing is caring, but do it carefully.

"I should say that I am in favor of threat intel-sharing," says Straight, "but any organization seeking to do so should make sure it understands what it is getting into and can support an ongoing threat intelligence consumption, production, and sharing process. In my view, none of the [government] documents or commentary I’ve seen so far, including DHS Guidance, sufficiently addresses the issues I have raised."  

Straight will present "Avoiding Legal Landmines Surrounding Your IT Infrastructure: Policies and Protocols" at Interop Las Vegas May 4.



Sara Peters is Senior Editor at Dark Reading and formerly the editor-in-chief of Enterprise Efficiency. Prior to that she was senior editor for the Computer Security Institute, writing and speaking about virtualization, identity management, cybersecurity law, and a myriad of other topics.
