Dark Reading is part of the Informa Tech Division of Informa PLC


Edge Articles

08:00 AM
Kacy Zurkus

Slow Your Roll Before Disclosing a Security Incident

Transparency rules, but taking the right amount of time to figure out what happened will go a long way toward setting the record straight.

Security incidents happen all the time, but when one actually strikes an organization, security professionals often find themselves uncertain whether it needs to be disclosed to the public or shared with law enforcement right away.

For example, back in 2006, an unknown user gained unauthorized access to a large number of electronic records on the McCombs School of Business computers at the University of Texas at Austin. But before disclosing what happened, the university took the time to determine what information was accessed, lost, or possibly modified, as well as how long the issue had been going on. UT-Austin also needed to understand exactly what information about the security incident it was able to share with the public.

"UT-Austin said what happened, who it affected, what the potential impact was to the people involved, and what they were going to do to help these people," says Greg White, director of the Center for Infrastructure Assurance and Security at The University of Texas at San Antonio (UTSA).

Clearly the university understood that planning and preparation were critical for security teams to know what to do in the event of an incident. Planning should start when an application is first architected or when features are added, according to Tim Mackey, principal security strategist at Synopsys CyRC (Cybersecurity Research Center).

"It is at this point that decisions surrounding the type of data being collected and processed are made," Mackey says.

Once an application is architected, the incident response plan should be updated — at least in an ideal world. In reality, though, it might be time for organizations to dust off the old playbooks.

Do You Really Have a Breach?
Sometimes organizations will try to keep security incidents tightly under wraps; however, that can backfire if the news gets out, as it did for Uber in 2017, when it emerged that the company had paid $100,000 to conceal a 2016 data breach.

"What makes this one stand out is absolutely the time duration," McAfee Labs vice president Vincent Weafer told Dark Reading in 2017. "It's almost a year ago that the actual event occurred; we're just finding out about it now."

If a company is evasive and only discloses a little at a time, it could potentially come across as an effort to hide something or an indication that they don't really know what they are doing. "Neither of these are good from a public relations standpoint," UTSA's White says.

On the other hand, late last month Capital One announced that an unauthorized user had accessed customer data. The announcement went public only 10 days after the security incident was detected.

When companies haven't been transparent about potential compromises, the public is less inclined to be forgiving. That said, a lack of disclosure isn't always an indication that organizations are withholding information. According to Benjamin Wright, attorney and SANS senior instructor, it is very common for people to misinterpret evidence. In order to avoid damage to the brand, he says, companies need to analyze with great rigor.

"In a modern enterprise, you can get thousands of alerts in a day, all giving some piece of information that there could be a problem. All of these little pieces of information are forms of evidence, which can be very hard to interpret," Wright says.

Often the tendency is to leap to conclusions, says Wright, who cautions companies to do their homework before reaching any legal conclusions. "They have to get the appropriate kinds of experts to really look at what happened and interpret them in a realistic way," he says.

Though it may feel frustrating to stakeholders who are anxious to know whether something happened and to what extent the business has been impacted, security teams must be thorough, UTSA's White says. 

He also agrees that thorough analysis is critical. "It will take some time to get an accurate picture to be able to fully disclose what has happened," he says. "In some cases you can make a quick guess, but to get accurate information out it will take more time."

White adds: "Another thing [is to consider] how you might be enhancing security in the future to ensure this doesn't happen again."

We Know the What, but How Do We Disclose?
In the US, data breach reporting requirements vary by state and by the type of data exfiltrated. For instance, Connecticut mandates that breaches "based on harm" be disclosed within 90 days and requires government notification. In South Carolina, though, breaches causing harm must be disclosed within the "most expedient time possible and without unreasonable delay," and law enforcement need only be notified if more than 1,000 residents have been affected.

According to Synopsys’ Mackey and "The Summary of US State Data Breach Notification Statutes" published by Davis Wright Tremaine, the reporting process is based on where the user resides, not where the organization's primary locations are.
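Because obligations attach to where each affected person resides rather than where the company operates, incident responders often tally affected residents per jurisdiction before deciding whom to notify and by when. Here is a minimal, hypothetical sketch of that tallying step; the `STATE_RULES` entries are illustrative placeholders loosely modeled on the Connecticut and South Carolina examples above, not actual legal data:

```python
# Hypothetical sketch: group affected individuals by state of residence
# and look up each state's (placeholder) notification rule.
from collections import Counter

STATE_RULES = {
    # deadline_days=None stands in for "most expedient time possible"
    "CT": {"deadline_days": 90, "law_enforcement_threshold": 0},
    "SC": {"deadline_days": None, "law_enforcement_threshold": 1000},
}

def obligations(affected_residents):
    """Return, per modeled state, the resident count and triggered duties."""
    counts = Counter(person["state"] for person in affected_residents)
    report = {}
    for state, count in counts.items():
        rule = STATE_RULES.get(state)
        if rule is None:
            continue  # states not modeled here need counsel's review
        report[state] = {
            "affected": count,
            "deadline_days": rule["deadline_days"],
            "notify_law_enforcement": count > rule["law_enforcement_threshold"],
        }
    return report
```

A real implementation would, of course, encode counsel-vetted statutes (and GDPR triggers for EU residents) rather than a two-entry dictionary; the point is only that the grouping key is the resident's jurisdiction.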

"For national or global organizations, this significantly complicates any incident response as even US-based companies may do business with EU residents and thus potentially trigger GDPR requirements," Mackey explains. 

Law enforcement can be called on to support an investigation, but Mackey says it's not reasonable to expect they'll be in a position to guide your full response.

Image Source: rnl via Adobe Stock


Kacy Zurkus is a cybersecurity and InfoSec freelance writer as well as a content producer for Reed Exhibition's security portfolio. Zurkus is a regular contributor to Security Boulevard and IBM's Security Intelligence. She has also contributed to several publications, ...


User Rank: Ninja
8/10/2019 | 3:51:02 PM
Re: Blase about the B-Word
Hmm, interesting, by definition the word breach means:

"A security breach is any incident that results in unauthorized access of data, applications, services, networks and/or devices by bypassing their underlying security mechanisms. A security breach occurs when an individual or an application illegitimately enters a private, confidential or unauthorized logical IT perimeter." - Definition Reference

So by definition, whether it is a "security breach," a "breach," or a "security incident," it all boils down to unauthorized access to resources that are managed by an authorized controlling party. At the end of the day, what difference does the label make? The party that experiences the attack has been affected by the incident and suffered irreparable damage to its reputation either way.

The significant problem is that people feel entitled to access resources they don't own. I am not sure why they feel they have to take such drastic steps to access systems, but this is bigger than just the choice of words; it focuses attention on the true nature of people and the dark side of man. The "psychology of the perpetrator," or criminal, is what we need to focus on: what makes them tick?

Just a thought for the day.
