Dark Reading is part of the Informa Tech Division of Informa PLC




5 Considerations For Post-Breach Security Analytics

Preparing collection mechanisms ahead of time, preserving chain of custody on forensic data, and performing focused analysis are all key to inspecting security data after a compromise

Some of the most important security analytics tasks that organizations perform are done under the pressure of a running clock and exacting standards for how data is preserved and manipulated. Unlike day-to-day log analysis, post-breach inspection of security data requires special considerations in the collection and handling of information following a compromise.

1. Collecting Relevant Data
The ticking clock is one of the most crucial factors to remember when conducting analytics on forensic data, for two reasons. First, investigators need to figure out what went wrong in order to stop active compromise situations and prevent further damage. Second, keeping the breach notification window short while still providing ample public information is crucial from a regulatory, legal, and PR perspective.

"When a breach has been detected, it's really important to have instant visibility from multiple viewpoints because you need to actually understand the breach, scope out the damage, and remediate," says Lucas Zaichkowsky, enterprise defense architect for AccessData.

Some of the types of data that can come into play within a forensic analysis include log files from multiple sources, information on affected endpoints, such as structured file data or data in memory, as well as volatile data, such as open network connections or running processes on systems, says J.J. Thompson, managing partner at Rook Consulting.

"You're going to want to collect anything that is in scope for the incident, so you're going to want to make sure you collect all of the system logs, database logs, and network logs that you can possibly get your hands on," he says, "and make sure that those are accessible and available for future analytics. That's step one."

Depending on where an initial log review leads the incident response team, that's where deeper collection of data within host logs will occur. This is in contrast to standard security operations analytics, where collection of host data happens "significantly less frequently," Thompson says.
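The collection step Thompson describes can be sketched as a small script that copies in-scope log files into a timestamped evidence directory and records basic metadata for each file as it is gathered. This is a minimal illustration only; the source paths and the `.log` naming convention are hypothetical, and a real collection would cover database and network logs as well:

```python
import json
import shutil
import time
from pathlib import Path

def collect_logs(sources, evidence_root):
    """Copy in-scope log files into a timestamped evidence directory,
    recording source path, size, and collection time for each file."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = Path(evidence_root) / f"collection-{stamp}"
    dest.mkdir(parents=True)
    manifest = []
    for src in map(Path, sources):
        for f in src.rglob("*.log"):
            shutil.copy2(f, dest / f.name)  # copy2 preserves timestamps
            manifest.append({
                "source": str(f),
                "size": f.stat().st_size,
                "collected_at": stamp,
            })
    # Keep a manifest alongside the evidence so the collection itself
    # is documented and available for future analytics.
    (dest / "manifest.json").write_text(json.dumps(manifest, indent=2))
    return dest
```

The manifest is what makes the collection "accessible and available for future analytics," in Thompson's phrase: months later, an analyst can see exactly what was gathered and when.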

[How do you know if you've been breached? See Top 15 Indicators of Compromise.]

2. Make Data Collection A Possibility
Unfortunately, many organizations struggle to gain timely visibility into security data because they did not put data collection mechanisms in place ahead of the incident to offer that immediate lens into what happened within the infrastructure impacted by a breach.

"A lot of time, people will find out what they need to collect once they see the indicators of compromise and realize that collecting that information from then on is kind of a moot point," says Chris Novak, global managing principal of investigative response for Verizon Enterprise Solutions, who recommends that organizations test themselves with mock incidents and walk through a collection scenario before their hair is on fire. "A mock incident is a way to really have those teachable moments as to what exactly it is that you need to be prepared for."

In addition to shortfalls in data collection mechanisms, the mock incident could uncover a frequently lacking piece of foundational information: namely, an up-to-date network diagram. Novak says he is frequently surprised by how many organizations might have a fully detailed rendering of the physical building a data center is hosted in while lacking a network map counterpart.

3. Preserve Data For Longer Than You Think You'll Need It
As organizations think about what types of data to routinely collect, they should also be mindful of keeping it long enough, as a precautionary measure, to allow a backward look at the data that reaches all the way to the initial compromise. According to Zaichkowsky, the longest span he has witnessed between the initial discovery of a compromise and the forensic trail back to the initial infiltration of "victim zero" was 456 days.

"That's a long attack life cycle that they need to be able to reconstruct what happened," he says.

As a rule of thumb, Zaichkowsky recommends organizations retain at least a year's worth of relevant log data, with three months' worth online and ready to search at a moment's notice.

In addition to this precautionary groundwork, once a breach has been discovered, those retention windows on the in-scope data should lengthen considerably. After an investigation is complete, organizations should secure and archive that collected data in case it is needed for a rainy day. That could mean for legal purposes, but also in case that compromise went deeper than initially thought.

"A lot of times companies will go through the process, remediate, and then when they find three months later the attack was resumed, they realize the attacker is still in the system, but all of the relevant data was deleted after the investigation," he says.
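Zaichkowsky's rule of thumb, plus the lengthened windows for breach-related data, can be expressed as a simple tiering policy. The thresholds below (90 days online, 365 days total) come straight from his recommendation; the tier names and the `legal_hold` flag for investigation-related data are illustrative assumptions:

```python
from datetime import timedelta

# Zaichkowsky's rule of thumb: three months searchable online,
# at least a year retained overall. Values are illustrative.
ONLINE_WINDOW = timedelta(days=90)
RETENTION_WINDOW = timedelta(days=365)

def retention_tier(age, legal_hold=False):
    """Classify a log's storage tier by age.

    Anything in scope for an active or completed investigation
    (legal_hold) is preserved indefinitely, regardless of age --
    guarding against the case where the attacker resurfaces after
    remediation and the relevant data has already been deleted."""
    if legal_hold:
        return "preserve"
    if age <= ONLINE_WINDOW:
        return "online"   # ready to search at a moment's notice
    if age <= RETENTION_WINDOW:
        return "archive"  # retained, but in cold storage
    return "eligible-for-deletion"
```

The `legal_hold` branch encodes the article's warning: in-scope data should never age out on the normal schedule once a breach has been discovered.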

4. Establish A Chain Of Custody
As Zaichkowsky mentioned, analytics of forensics data will lead to inspection of data that's rarely looked into on a day-to-day basis. As an investigation team digs into a collection of volatile and legally sensitive data, they must not only think about preservation of data that will lead to swift mitigation of risk, but also preservation of evidence in a legally admissible way.

"Things typically start with the preservation of the evidence: not powering off systems so we can collect volatile data and maintaining a proper chain of custody," Novak says.

Establishing chain of custody is an imperative for cases where litigation or legal proceedings could occur. The key is being able to document how data was obtained, by whom, when it was obtained, and maintaining the integrity of the data state to prove it was never tampered with during the investigation process, Thompson says.

"It's really about making sure that you can show counsel that this evidence was obtained using forensically sound mechanisms, it was not altered, and you have that evidence available for opposing counsels, advisers, consultants, and experts to analyze it there themselves and see if they come to the same conclusions," he says.

Typically, the best practice is to pull the entire binary or data in full, duplicate it, and keep a hashed copy prior to running analytics on the working copy of data in order to show it hasn't been altered in any way, Zaichkowsky says.
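The duplicate-and-hash practice Zaichkowsky describes can be sketched in a few lines: record the original's digest, run all analytics against a working copy, and re-hash the original later to demonstrate it was never altered. A minimal sketch using SHA-256 (the specific algorithm and function names are illustrative):

```python
import hashlib
import shutil

def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading in chunks so
    large disk images don't have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def acquire_evidence(original, working_copy):
    """Record the original's hash, then duplicate it for analysis.
    All analytics run against the working copy only."""
    baseline = sha256_of(original)
    shutil.copy2(original, working_copy)
    return baseline

def verify_integrity(original, baseline):
    """Re-hash the original to show it was never tampered with
    during the investigation."""
    return sha256_of(original) == baseline
```

Because the baseline digest is taken before any analysis begins, it gives opposing counsel's experts a way to independently confirm the evidence they receive matches what was originally acquired.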

5. Go Down The Rabbit Hole Without Getting Lost
With evidence bagged and tagged and data ready for analysis, the hard work still awaits investigators who must roll up their sleeves and inspect the data. While the mantra for forensics collection of data is to collect as much as you can that could tie to the incident, that scope needs to be tightened once it is time to run analysis.

"Usually what happens is you have massive scope creep and an overconsumption of that forensics data -- you collect so much you feel like you have to analyze the same amount," says Novak, who instead recommends customers use an "evidence-based" approach to the investigation. "How did you recognize the problem? Start there and only expand as much as you need."

Thompson agrees, stating that organizations should let the indicators of compromise lead the investigation into the paths of analysis. One way he gets his analysts to tighten their focus is to go through an exercise where they literally draw a box on a piece of paper and write out the components that led them to believe there was a compromise. The idea is to draw out lines and start brainstorming within that box similar to how a detective would work through evidence in a physical crime case. With that picture in front of them, it is easier for analysts to list the investigative techniques to start with so they can jump down potential rabbit holes without getting lost.

"That really helps them keep on track so that they don't end up veering off course," he says.
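Thompson's "box" exercise amounts to an evidence-led expansion: start only from the artifacts that raised the alarm and widen the scope one hop at a time, rather than analyzing everything collected. A minimal sketch of that idea, where the artifact names and the link graph are entirely hypothetical:

```python
from collections import deque

def evidence_led_scope(seed_indicators, links, max_hops=2):
    """Expand an investigation outward from seed indicators of
    compromise, one hop at a time, instead of analyzing all
    collected data at once.

    links maps an artifact to related artifacts (e.g. a compromised
    host to the accounts that logged into it)."""
    in_scope = set(seed_indicators)
    frontier = deque((s, 0) for s in seed_indicators)
    while frontier:
        artifact, hops = frontier.popleft()
        if hops >= max_hops:
            continue  # stop expanding: this is the guard against scope creep
        for related in links.get(artifact, []):
            if related not in in_scope:
                in_scope.add(related)
                frontier.append((related, hops + 1))
    return in_scope
```

Raising `max_hops` only when the evidence demands it mirrors Novak's advice: start where the problem was recognized and expand only as much as you need.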


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.
