Slow Your Roll Before Disclosing a Security Incident
Transparency rules, but taking the right amount of time to figure out what happened will go a long way toward setting the record straight.
Security incidents happen all the time, but when one actually strikes an organization, security professionals often find themselves uncertain whether it needs to be disclosed to the public or shared with law enforcement right away.
For example, back in 2006, an unknown user gained unauthorized access to a large number of electronic records on the McCombs School of Business computers at the University of Texas at Austin. But before disclosing what happened, the university took the time to determine what information was accessed, lost, or possibly modified, as well as how long the issue had been going on. UT-Austin also needed to understand exactly what information about the security incident it was able to share with the public.
"UT-Austin said what happened, who it affected, what the potential impact was to the people involved, and what they were going to do to help these people," says Greg White, director of the Center for Infrastructure Assurance and Security at The University of Texas at San Antonio (UTSA).
Clearly the university understood that planning and preparation were critical for security teams to know what to do in the event of an incident. Planning should start when an application is first architected or when features are added, according to Tim Mackey, principal security strategist at Synopsys CyRC (Cybersecurity Research Center).
"It is at this point that decisions surrounding the type of data being collected and processed are made," Mackey says.
Once an application is architected, the incident response plan should be updated — at least in an ideal world. In reality, though, it might be time for organizations to dust off the old playbooks.
Do You Really Have a Breach?
Sometimes organizations will make efforts to keep security incidents tightly under wraps; however, that can backfire if the news gets out, as was the case for Uber in 2017, when it came out that the company had paid $100,000 to conceal a 2016 data breach.
"What makes this one stand out is absolutely the time duration," McAfee Labs vice president Vincent Weafer told Dark Reading in 2017. "It's almost a year ago that the actual event occurred; we're just finding out about it now."
If a company is evasive and discloses only a little at a time, it can come across as an effort to hide something, or as a sign that it doesn't really know what it is doing. "Neither of these are good from a public relations standpoint," UTSA's White says.
On the other hand, late last month Capital One announced that an unauthorized user had accessed customer data. The announcement went public only 10 days after the security incident was detected.
When companies haven't been transparent about potential compromises, the public is less inclined to be forgiving. That said, a lack of disclosure isn't always an indication that organizations are withholding information. According to Benjamin Wright, attorney and SANS senior instructor, it is very common for people to misinterpret evidence. To avoid damage to their brand, he says, companies need to analyze with great rigor.
"In a modern enterprise, you can get thousands of alerts in a day, all giving some piece of information that there could be a problem. All of these little pieces of information are forms of evidence, which can be very hard to interpret," Wright says.
Often the tendency is to leap to conclusions, says Wright, who cautions companies to do their homework before reaching any legal conclusions. "They have to get the appropriate kinds of experts to really look at what happened and interpret them in a realistic way," he says.
Though it may feel frustrating to stakeholders who are anxious to know whether something happened and to what extent the business has been impacted, security teams must be thorough, UTSA's White agrees.
"It will take some time to get an accurate picture to be able to fully disclose what has happened," he says. "In some cases you can make a quick guess, but to get accurate information out it will take more time."
White adds: "Another thing [is to consider] how you might be enhancing security in the future to ensure this doesn't happen again."
We Know the What, but How Do We Disclose?
In the US, data breach reporting requirements vary by state and by the type of data exfiltrated. For instance, the state of Connecticut mandates that breaches "based on harm" be disclosed within 90 days and requires government notification. In South Carolina, though, breaches causing harm must be disclosed within the "most expedient time possible and without unreasonable delay," and law enforcement needs to be notified only if more than 1,000 residents have been affected.
According to Synopsys’ Mackey and "The Summary of US State Data Breach Notification Statutes" published by Davis Wright Tremaine, the reporting process is based on where the user resides, not where the organization's primary locations are.
"For national or global organizations, this significantly complicates any incident response as even US-based companies may do business with EU residents and thus potentially trigger GDPR requirements," Mackey explains.
Law enforcement can be called on to support an investigation, but Mackey says it's not reasonable to expect that it will be in a position to guide your full response.
Image Source: rnl via Adobe Stock
Kacy Zurkus is a cybersecurity and InfoSec freelance writer as well as a content producer for Reed Exhibition's security portfolio. Zurkus is a regular contributor to Security Boulevard and IBM's Security Intelligence. She has also contributed to several publications.