Risk | Commentary
Joshua Goldfarb | 10/15/2015 11:00 AM

An Atypical Approach To DNS

It's now possible to architect network instrumentation to collect fewer data sources of higher value to security operations. Here's how -- and why -- you should care.

The idea for this column started when a colleague asked me a few questions about analyzing metadata generated from network traffic. We discussed a few possibilities, including various types of metadata. A few minutes after our call, I recalled an email thread I had seen discussing ways to collect metadata about DNS transactions. Most of the solutions offered on that thread involved deploying new equipment to collect additional telemetry.

Suffice it to say, based on the discussions I have with customers, yet another piece of equipment to deploy, manage, and maintain is not high on their wish list. On the other hand, DNS data is extremely valuable for security operations and incident response. So how can we reconcile these two realities? Perhaps taking an atypical approach to DNS is the answer.
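
To make this concrete, here is a minimal sketch of what collecting DNS transaction metadata from traffic you may already be capturing could look like -- no new appliance required. It assumes the scapy library (version 2.5 or later, where the DNS question and answer sections are lists) and a capture file named traffic.pcap; both are my illustrative choices, not anything prescribed by the discussion above.

# Walk an existing packet capture and emit one metadata record per
# DNS response, instead of deploying new equipment just to log DNS.
# Assumes: scapy >= 2.5 and a hypothetical capture file traffic.pcap.
from scapy.all import rdpcap, IP, DNS

def dns_metadata(pcap_path):
    for pkt in rdpcap(pcap_path):
        if not (pkt.haslayer(IP) and pkt.haslayer(DNS)):
            continue
        dns = pkt[DNS]
        if dns.qr != 1 or not dns.qd:       # keep only responses with a question
            continue
        yield {
            "client": pkt[IP].dst,          # the response returns to the client
            "server": pkt[IP].src,
            "query": dns.qd[0].qname.decode(errors="replace"),
            "rcode": dns.rcode,
            "answers": [str(rr.rdata) for rr in (dns.an or [])],
        }

for record in dns_metadata("traffic.pcap"):
    print(record)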

Almost five years ago, I gave a talk at the GFIRST conference entitled "Uber Data Source: Holy Grail or Final Fantasy?" In this talk, I proposed that given the volume and complexity that a larger number of highly specialized data sources brings to security operations, it makes sense to think about moving towards a smaller number of more generalized data sources.  

One could also imagine taking this concept further, ultimately resulting in an "uber data source." For the purposes of this discussion, I will stay within the context of network traffic data sources. I consider host-level (AV logs), system-level (Linux syslogs), and application-level (web server logs) data sources beyond the scope of what I would generalize into an uber data source, at least for network traffic data and metadata.

Begin at the beginning

Let's start by looking at the current state of security operations in most organizations, specifically as it relates to network traffic data log collection. In most organizations, a large number of highly specialized network traffic data sources are collected. This creates a complex ecosystem of logs that clouds the operational workflow.  

In my experience, the first question asked by an analyst when developing new alerting content or performing incident response is "To which data source or data sources do I go to find the data I need?" I would suggest that this wastes precious resources and time. Rather, the analyst's first question should be "What questions do I need to ask of the data in order to accomplish what I have set out to do?" This necessitates a "go to" data source -- the "uber data source."

Additionally, it is helpful to highlight the difference between data value and data volume. Each data source that an organization collects has a certain value, relevance, and usefulness to security operations. Similarly, each data source produces a certain volume of data when collected and warehoused. Data value and data volume do not necessarily correlate. For example, firewall logs often consume 80% of an organization's log storage resources, yet prove quite difficult to work with when developing alerting content or performing incident response. Conversely, DHCP logs provide valuable insight to security operations, but are relatively low volume.

There is also another angle to the data value vs. data volume point. As you can imagine, collecting a large volume of less valuable logs creates two issues, among others (see the sketch after this list):

  • Storage is consumed more quickly, thus reducing the retention period. (This can have a detrimental effect on security operations when performing incident response, particularly around intrusions that have been present on the network for quite some time.)
  • Queries return more slowly due to the larger volume of data. (This can have a detrimental effect on security operations when performing incident response, since answers to important questions come more slowly.)
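
A quick back-of-the-envelope calculation makes the retention effect concrete. The figures below are entirely hypothetical, chosen only to mirror the "firewall logs consume 80% of storage" pattern described earlier:

# Retention math: dropping one high-volume, low-value source
# stretches the same storage budget much further.
STORAGE_TB = 100.0                  # total log storage budget

daily_tb = {                        # hypothetical daily log volumes
    "firewall": 4.0,                # ~80% of daily intake
    "proxy": 0.6,
    "dns": 0.3,
    "dhcp": 0.1,
}

def retention_days(sources):
    return STORAGE_TB / sum(daily_tb[s] for s in sources)

print("All sources:      %3.0f days" % retention_days(daily_tb))
print("Without firewall: %3.0f days"
      % retention_days(s for s in daily_tb if s != "firewall"))
# All sources:       20 days
# Without firewall: 100 days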

Those who disagree with me will argue: "I can't articulate what it is, but I know that when it comes time to perform incident response, I will need something from those other data sources."  To those people, I would ask: If you're collecting so much data, irrespective of its value to security operations, that your retention period is cut to less than 30 days and your queries take hours or days to run, are you really able to use that data you've collected for incident response?  I think not.

If we take a step back, we see that the current "give me everything" approach to log collection involves a large number of highly specialized data sources. This is the case for a variety of reasons, but historical reasons and a lack of understanding regarding each data source's value to security operations are among them.  

A better way

If we think about what these data sources are conceptually, we see that they are essentially metadata from layer 4 of the OSI model (the transport layer) enriched with specific data from layer 7 (the application layer) suiting the purpose of that particular data source. For example, DNS logs are essentially layer 4 metadata enriched with additional contextual information about DNS queries and responses found at layer 7. I would assert that there is a better way to operate without adversely affecting network visibility.
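
As a thought experiment, here is one shape such a generalized, layer 7-enriched record could take. The schema and field names below are mine alone, purely for illustration -- every record carries the same layer 4 core, and protocol-specific context rides along in an optional enrichment mapping:

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FlowRecord:
    ts: float                # epoch seconds, start of flow
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int
    proto: str               # "tcp" / "udp"
    bytes_sent: int
    bytes_recv: int
    app: Optional[str] = None                       # "dns", "http", ...
    enrichment: dict = field(default_factory=dict)  # layer 7 context

# A DNS transaction becomes an ordinary flow record plus enrichment,
# rather than a row in a separate, specialized DNS log:
rec = FlowRecord(
    ts=1444906800.0, src_ip="10.0.0.5", src_port=53144,
    dst_ip="10.0.0.2", dst_port=53, proto="udp",
    bytes_sent=74, bytes_recv=120, app="dns",
    enrichment={"query": "example.com.", "rcode": 0,
                "answers": ["93.184.216.34"]},
)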

The question I asked back in 2011 was "Why not generalize this?" Why collect DNS logs as a specialized data source when the same visibility can be provided as part of a more generalized data source of higher value to security operations? It is now possible to architect network instrumentation to collect fewer data sources of higher value to security operations. This has several benefits:

  • Less redundancy and wastefulness across data sources
  • Less confusion surrounding where to go to get the required data
  • Reduced storage cost or increased retention period at the same storage cost
  • Improved query performance
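
To illustrate the "less confusion" and "improved query performance" benefits in particular: against a single generalized store, the analyst's question ("which hosts looked up this domain?") reduces to one filter rather than a hunt across specialized log silos. The records and domain below are hypothetical:

# Records follow the generalized shape sketched earlier.
records = [
    {"client": "10.0.0.5", "app": "dns",
     "enrichment": {"query": "update.badsite.example."}},
    {"client": "10.0.0.7", "app": "http",
     "enrichment": {"host": "intranet.local"}},
    {"client": "10.0.0.9", "app": "dns",
     "enrichment": {"query": "update.badsite.example."}},
]

def clients_querying(domain, recs):
    return sorted({r["client"] for r in recs
                   if r.get("app") == "dns"
                   and r["enrichment"].get("query") == domain})

print(clients_querying("update.badsite.example.", records))
# -> ['10.0.0.5', '10.0.0.9']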

There is no doubt in my mind that DNS is an incredibly valuable data source for security operations and incident response. Unfortunately, many organizations do not collect DNS data, only to find out later that not doing so has caused them great harm. Based on feedback I receive, one of the main reasons more organizations do not collect DNS data is the complexity, load, and cost associated with doing so. This is where the concept of the "uber data source" can help. Providing organizations with a simpler, lighter-weight, and more cost-effective way to collect, retain, and analyze valuable layer 7-enriched metadata is a way to encourage them to collect DNS data, along with other valuable metadata. In this case, I think less is more.

Josh (Twitter: @ananalytical) is currently Director of Product Management at F5. Previously, Josh served as VP, CTO - Emerging Technologies at FireEye and as Chief Security Officer for nPulse Technologies until its acquisition by FireEye. Prior to joining nPulse, ...
 
