Establishing 'normal' behaviors, traffic, and patterns across the network makes it easier to spot previously unknown bad behavior

While so much time in network security is spent discussing how to discover anomalies that can indicate an attack, one fundamental sometimes gets lost in the mix: first understanding what "normal" looks like. Establishing baseline data for typical traffic activity and standard configurations for network devices can go a long way toward helping security analysts spot potential problems, experts say.

"There are so many distinct activities in today's networks with a high amount of variance that it is extremely difficult to discover security issues without understanding what normal looks like," says Seth Goldhammer, director of product management for LogRhythm.

Wolfgang Kandek, CTO of Qualys, agrees: once IT organizations establish baseline data, deviations from that baseline become much easier to track.

"For example, if one knows that the use of dynamic DNS services is at a low 0.5 percent of normal DNS traffic, an increase to 5 percent is an anomaly that should be investigated and might well lead to the detection of a malware infection," Kandek says.

[Are you using your human sensors? See Using The Human Perimeter To Detect Outside Attacks.]

But according to Goldhammer, simply understanding normal can be a challenge in its own right. Baselining activities can mean tracking many different attributes across multiple dimensions, he says, which means understanding normal host behavior, network behavior, user behavior, and application behavior, along with other internal information, such as the function and vulnerability state of the host. Additionally, external context -- such as the reputation of an IP address -- plays a role.

"For example, on any given host, that means understanding which processes and services are running, which users access the host, how often, [and] what files, databases, and/or applications do these users access," he says. "On the network [it means] which hosts communicate to which other hosts, what application traffic is generated, and how much traffic is generated."

It's a hard slog, and, unfortunately, the open nature of Internet traffic and divergent user behavior make it difficult to offer cookie-cutter baseline recommendations for any organization, experts say.

"Networks, in essence, serve the needs of their users. Users are unique individuals and express their different tastes, preferences, and work styles in the way they interact with the network," says Andrew Brandt, director of threat research for the advanced threat protection group for Blue Coat Systems. "The collection of metadata about those preferences can act like a fingerprint of that network. And each network fingerprint is going to be as unique as its users who generate the traffic."

Another dimension in developing a baseline is time. The time range used to sample data for a benchmark often depends on what kind of abnormality the organization hopes to discover.

"For example, if I am interested in detecting abnormal file access, I would want a longer benchmark period building a histogram of file accesses per user over the previous week to compare to current week, whereas if I want to monitor the number of authentication successes and failures to production systems, I may only need to benchmark the previous day compare to the current day," Goldhammer says.

While baselines can be useful for detecting deviations, TK Keanini, CTO of Lancope, warns that it may actually be useful to think in terms of pattern contrasts rather than "normal" and "abnormal."

"The term 'anomaly' is used a lot because people think of pattern A as normal and patterns not A as the anomaly, but I prefer just thinking about it as a contrast between patterns," Keanini says. "Especially as we develop advanced analytics for big data, the general function of 'data contrasts' deliver emergent insights."

This kind of analysis also makes it harder to fall prey to adversaries who understand how baselines can be used to track deviations. Instead of a single, static baseline, advanced organizations will constantly track patterns and look for contrasts across time.

"The adversary will always try to understand the target norms because this allows them to evade detection," he says. "Think about how hard you make it for the adversary when you establish your own enterprise wide norms and change them on a regular basis."

However it is done, when a contrast of patterns does flag those telltale anomalies, Kandek recommends organizing an immediate analytical response.

"To deal with network anomalies, IT departments can lean on a scaled-down version their incident response process," he says. "Have a team in place to investigate the anomalies, document the findings, and take the appropriate actions, including adapting the baselines or escalating to a full-blown incident response action plan."

Foremost in that immediate action is information-sharing, Brandt recommends.

"When you identify the appropriate parameters needed to classify traffic from the "unknown" to the "known bad" column, it's important to share that information, first internally to lock down your own network, and then more widely, so others might learn how they can detect anything similar on their own networks," he says.

About the Author(s)

Ericka Chickowski, Contributing Writer

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.
