Dark Reading is part of the Informa Tech Division of Informa PLC



10/15/2015
11:00 AM
Joshua Goldfarb
Commentary

An Atypical Approach To DNS

It's now possible to architect network instrumentation to collect fewer data sources of higher value to security operations. Here's how -- and why -- you should care.

The idea for this column started when a colleague asked me a few questions about analyzing metadata generated from network traffic. We discussed several possibilities, including different types of metadata. A few minutes after our call, I recalled an email thread discussing ways to collect metadata about DNS transactions. Most of the solutions offered on that thread involved deploying new equipment to collect additional telemetry.

Suffice it to say, based on the discussions I have with customers, yet another piece of equipment to deploy, manage, and maintain is not high on their wish list. On the other hand, DNS data is extremely valuable for security operations and incident response. So how can we resolve this tension? Perhaps taking an atypical approach to DNS is the answer.

Almost five years ago, I gave a talk at the GFIRST conference entitled "Uber Data Source: Holy Grail or Final Fantasy?" In that talk, I proposed that, given the volume and complexity that a large number of highly specialized data sources bring to security operations, it makes sense to move toward a smaller number of more generalized data sources.

One could also imagine taking this concept further, ultimately arriving at an "uber data source." For this discussion, let's stay within the context of network traffic data sources: host-level (AV logs), system-level (Linux syslogs), and application-level (web server logs) data sources fall outside what I would consider generalizing into an uber data source, at least for network traffic data and metadata.

Begin at the beginning

Let's start by looking at the current state of security operations in most organizations, specifically as it relates to network traffic data log collection. In most organizations, a large number of highly specialized network traffic data sources are collected. This creates a complex ecosystem of logs that clouds the operational workflow.  

In my experience, the first question asked by an analyst when developing new alerting content or performing incident response is "To which data source or data sources do I go to find the data I need?" I would suggest that this wastes precious resources and time. Rather, the analyst's first question should be "What questions do I need to ask of the data in order to accomplish what I have set out to do?" This necessitates a "go to" data source -- the "uber data source."

Additionally, it is helpful to highlight the difference between data value and data volume. Each data source that an organization collects will have a certain value, relevance, and usefulness to security operations. Similarly, each data source will also produce a certain volume of data when collected and warehoused. Data value and data volume do not necessarily correlate. For example, firewall logs often consume 80% of an organization's log storage resources, but actually prove quite difficult to work with when developing alerting content or performing incident response. Conversely, DHCP logs, to take one example, provide valuable insight to security operations but are relatively low volume.

There is another angle to the data value vs. data volume point. Collecting a large volume of less valuable logs creates two issues, among others:

  • Storage is consumed more quickly, thus reducing the retention period. (This can have a detrimental effect on security operations when performing incident response, particularly around intrusions that have been present on the network for quite some time.)
  • Queries return more slowly due to the larger volume of data. (This can have a detrimental effect on security operations when performing incident response, since answers to important questions come more slowly.)
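The first point above is simple arithmetic. As a rough sketch (all figures below are hypothetical, not drawn from any particular organization), here is how ingest volume drives retention under a fixed storage budget:

```python
def retention_days(storage_tb: float, daily_ingest_gb: float) -> float:
    """Days of logs that fit in a fixed storage budget."""
    return storage_tb * 1024 / daily_ingest_gb

# Hypothetical 50 TB budget. Collecting everything (firewall logs and all)
# at 2 TB/day versus collecting only a higher-value subset at 0.5 TB/day:
collect_everything = retention_days(50, 2000)
higher_value_only = retention_days(50, 500)

print(round(collect_everything))  # 26 days -- under the 30-day mark noted above
print(round(higher_value_only))   # 102 days
```

Trimming low-value volume buys retention directly; the same arithmetic works in reverse to size a storage budget for a target retention period.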

Those who disagree with me will argue: "I can't articulate what it is, but I know that when it comes time to perform incident response, I will need something from those other data sources."  To those people, I would ask: If you're collecting so much data, irrespective of its value to security operations, that your retention period is cut to less than 30 days and your queries take hours or days to run, are you really able to use that data you've collected for incident response?  I think not.

If we take a step back, we see that the current "give me everything" approach to log collection involves a large number of highly specialized data sources. This is the case for a variety of reasons, but historical reasons and a lack of understanding regarding each data source's value to security operations are among them.  

A better way

If we think about what these data sources are conceptually, we see that they are essentially metadata from layer 4 of the OSI model (the transport layer) enriched with specific data from layer 7 of the OSI model (the application layer) suiting the purpose of that particular data source. For example, DNS logs are essentially metadata from layer 4 of the OSI model enriched with additional contextual information regarding DNS queries and responses found in layer 7 of the OSI model. I would assert that there is a better way to operate without adversely affecting network visibility.
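To make the idea concrete, here is a sketch of what such a generalized record might look like: a common layer-4 core shared by every transaction, with an application-specific layer-7 block attached. The field names are illustrative only, not any vendor's or tool's schema:

```python
# A generic layer-4 flow record: the fields every network transaction shares.
flow = {
    "ts": "2015-10-15T11:00:00Z",
    "src_ip": "10.1.2.3", "src_port": 53124,
    "dst_ip": "10.0.0.53", "dst_port": 53,
    "proto": "udp",
    "bytes_out": 74, "bytes_in": 90,
}

# The same record enriched with layer-7 DNS context, rather than kept as a
# separate specialized DNS log. Non-DNS traffic would carry a different
# "app" block (HTTP, TLS, etc.) on the same layer-4 core.
flow["app"] = {
    "protocol": "dns",
    "query": "example.com",
    "qtype": "A",
    "rcode": "NOERROR",
    "answers": ["93.184.216.34"],
}
```

The point of the shared core is that one collection pipeline and one retention policy cover all protocols; only the enrichment block varies.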

The question I asked back in 2011 was "Why not generalize this?" Why collect DNS logs as a specialized data source when the same visibility can be provided as part of a more generalized data source of higher value to security operations? It is now possible to architect network instrumentation to collect fewer data sources of higher value to security operations. This has several benefits:

  • Less redundancy and wastefulness across data sources
  • Less confusion surrounding where to go to get the required data
  • Reduced storage cost or increased retention period at the same storage cost
  • Improved query performance
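The second benefit above is worth illustrating. With one generalized store, the analyst's question maps to a single filter rather than a choice among log sources. A minimal sketch, using the same hypothetical record shape as above:

```python
# A generalized store: DNS and TLS transactions side by side, one schema.
records = [
    {"proto": "udp", "dst_port": 53,
     "app": {"protocol": "dns", "query": "evil.example", "rcode": "NXDOMAIN"}},
    {"proto": "tcp", "dst_port": 443,
     "app": {"protocol": "tls", "sni": "intranet.example"}},
]

# "Which DNS lookups failed?" -- one question, one data source, one query.
failed = [r["app"]["query"] for r in records
          if r["app"].get("protocol") == "dns"
          and r["app"].get("rcode") == "NXDOMAIN"]

print(failed)  # ['evil.example']
```

The analyst starts from the question, not from a decision about which specialized log to consult.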

There is no doubt in my mind that DNS is an incredibly valuable data source for security operations and incident response. Unfortunately, many organizations do not collect DNS data, only to find out later that not doing so has caused them great harm. Based on feedback I receive, one of the main reasons more organizations do not collect DNS data is the complexity, load, and cost associated with doing so. This is where the concept of the "uber data source" can help. Providing organizations a simpler, lighter-weight, and more cost-effective way to collect, retain, and analyze valuable layer 7 enriched metadata is a way to encourage them to collect DNS data, along with other valuable metadata. In this case, I think less is more.

Josh (Twitter: @ananalytical) is an experienced information security leader who works with enterprises to mature and improve their enterprise security programs. Previously, Josh served as VP, CTO - Emerging Technologies at FireEye and as Chief Security Officer for ...