Perimeter
1/4/2011
12:06 PM
John H. Sawyer
Commentary

Mining Web Proxy Logs For Interesting, Actionable Data

Simple statistical analysis of Web proxy logs provides a wealth of information and turns up incidents missed by antivirus

The importance of system logging and log analysis is often overlooked. I know it's easy to say that off-the-cuff, but I'll back it up with one of my favorite data points to come out of the Verizon Data Breach Investigation Report: 86% of the victims had evidence of the breach in their logs. As a result of that statistic, Verizon made the recommendation to "change your approach to event monitoring and log analysis."

In the past I've discussed centralized log collection and monitoring of Windows environments and the value of tools like Splunk. There's a plethora of logs within an organization that can provide insight into what's going on and when bad things are starting to happen. The problem is those logs are regularly ignored until it's too late and IT is scrambling to figure out what happened.

Antivirus logs often go unchecked with the assumption that they're working, but they can be useful in spotting attack trends and problematic users who regularly visit malicious sites. Likewise, Web proxy logs hold similar value and can be mined for a lot of useful, actionable data, like daily summaries of malicious HTTP User Agents, content types (think "executables"), and more.
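To make that concrete, here is a minimal Python sketch of the kind of daily summary I'm describing. The log layout it assumes (tab-delimited, with the client IP, URL, content type, and User Agent in fixed columns) is hypothetical, so adjust the field indexes to whatever your proxy actually exports:

#!/usr/bin/env python3
"""Daily proxy-log summary: executable downloads and content-type counts.

A minimal sketch. It assumes a tab-delimited proxy log (hypothetical layout)
with the client IP, requested URL, Content-Type, and User-Agent in fixed
columns; adjust the field indexes to match your proxy's export format.
"""
import sys
from collections import Counter

EXEC_TYPES = {"application/x-msdownload", "application/x-dosexec",
              "application/octet-stream"}

content_types = Counter()
user_agents = Counter()
exe_downloads = []

with open(sys.argv[1]) as log:
    for line in log:
        fields = line.rstrip("\n").split("\t")
        if len(fields) < 4:
            continue
        client, url, ctype, agent = fields[0], fields[1], fields[2], fields[3]
        content_types[ctype] += 1
        user_agents[agent] += 1
        if ctype in EXEC_TYPES or url.lower().endswith(".exe"):
            exe_downloads.append((client, url))

print("Top content types:")
for ctype, count in content_types.most_common(10):
    print(f"  {count:8d}  {ctype}")

print("\nExecutable downloads:")
for client, url in exe_downloads:
    print(f"  {client}  {url}")

Run it against a day's worth of logs and the executable-download list alone will usually hand you a short list of hosts worth a second look.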

Some recent research into proxy log mining turned up an interesting presentation, from Matthew Myrick of the Lawrence Livermore National Laboratory, titled "Mining Proxy Logs: Finding Needles In Haystacks." Matthew provided some excellent examples of how his team leverages its BlueCoat Web proxy logs to find "bad guys" through simple statistics, User Agents, content types, and compound searches. It's a great presentation that provides ideas of how easy it is to develop these tools in-house and perform daily analysis with little effort.
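Matthew's slides aren't reproduced here, but one way "simple statistics" can look in practice is flagging User Agent strings seen from only a handful of clients: legitimate browsers show up across most of the fleet, while malware tends to use odd or one-off strings. This rough sketch (not his code, and assuming the same hypothetical log layout as above) counts distinct clients per User Agent and prints the rare ones:

#!/usr/bin/env python3
"""Flag rare User-Agent strings -- one example of 'simple statistics'.

A sketch only: User-Agents seen from just a few distinct clients are often
worth a closer look. Assumes the same hypothetical tab-delimited layout as
the earlier example.
"""
import sys
from collections import defaultdict

RARE_THRESHOLD = 3  # flag agents seen from this many clients or fewer

clients_per_agent = defaultdict(set)

with open(sys.argv[1]) as log:
    for line in log:
        fields = line.rstrip("\n").split("\t")
        if len(fields) < 4:
            continue
        client, agent = fields[0], fields[3]
        clients_per_agent[agent].add(client)

for agent, clients in sorted(clients_per_agent.items(),
                             key=lambda kv: len(kv[1])):
    if len(clients) <= RARE_THRESHOLD:
        print(f"{len(clients):4d} client(s)  {agent}")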

Another find during my research was a cool Ruby-based tool called LightBulb, which was created to help find automated traffic in BlueCoat Web proxy logs. The idea behind it is that malware must phone home, and it often does so at a set interval. Those beacons can be spotted in the proxy logs by analyzing the randomness of the timing of traffic to each website: traffic with little to no randomness points to a regular beacon.
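LightBulb itself is Ruby and built around BlueCoat logs, but the underlying idea is easy to sketch in a few lines of Python. Assuming each log line carries a Unix timestamp, client IP, and destination host in known columns (again, a hypothetical layout), you can measure how regular the gaps between requests are for each client/host pair; near-constant gaps are beacon-like:

#!/usr/bin/env python3
"""Rough beacon detection from request timing, in the spirit of LightBulb.

A sketch only (LightBulb itself is Ruby and works on BlueCoat logs).
Assumes each log line carries a Unix timestamp, client IP, and destination
host in known columns. For each (client, host) pair it measures how regular
the gaps between requests are; a very low coefficient of variation means
the timing is nearly fixed-interval, i.e. beacon-like.
"""
import sys
from collections import defaultdict
from statistics import mean, pstdev

MIN_REQUESTS = 10   # ignore pairs with too few requests to judge
MAX_CV = 0.1        # flag pairs whose interval variation is under 10%

times = defaultdict(list)

with open(sys.argv[1]) as log:
    for line in log:
        fields = line.split("\t")
        if len(fields) < 3:
            continue
        ts, client, host = float(fields[0]), fields[1], fields[2]
        times[(client, host)].append(ts)

for (client, host), stamps in times.items():
    if len(stamps) < MIN_REQUESTS:
        continue
    stamps.sort()
    gaps = [b - a for a, b in zip(stamps, stamps[1:])]
    avg = mean(gaps)
    if avg == 0:
        continue
    cv = pstdev(gaps) / avg   # low value = very regular timing
    if cv <= MAX_CV:
        print(f"possible beacon: {client} -> {host} "
              f"(every ~{avg:.0f}s, cv={cv:.3f})")

The coefficient-of-variation threshold is a guess you'll want to tune; real beacons often jitter their intervals, so loosening MAX_CV trades noise for coverage.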

There are numerous other ways to slice and dice Web proxy logs to find bad things. For example, comparing a list of currently known malicious domains or Zeus malware domains and IPs to the proxy logs can help find hosts that have been attacked or infected, but not blocked by the Web proxy. And based on your experience and environment, you'll likely come up with other ways.
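As a sketch of that domain-matching approach, the following Python takes a plain-text list of known-bad domains (say, an export from a Zeus domain tracker) and reports any internal host seen talking to one of them. The file names and log layout are assumptions, not a prescription:

#!/usr/bin/env python3
"""Match proxy traffic against a list of known-bad domains.

A sketch assuming a plain-text blocklist (one domain per line) and a
hypothetical tab-delimited log layout with the client IP in the first
column and the destination host in the third. Any internal host seen
talking to a listed domain is a candidate for follow-up, since the proxy
saw the traffic but did not block it.
"""
import sys

blocklist_file, log_file = sys.argv[1], sys.argv[2]

with open(blocklist_file) as f:
    bad_domains = {line.strip().lower() for line in f if line.strip()}

hits = {}
with open(log_file) as log:
    for line in log:
        fields = line.rstrip("\n").split("\t")
        if len(fields) < 3:
            continue
        client, host = fields[0], fields[2].lower()
        if host in bad_domains:
            hits.setdefault(host, set()).add(client)

for host, clients in sorted(hits.items()):
    print(f"{host}: contacted by {', '.join(sorted(clients))}")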

I think what ultimately has to happen for organizations is the realization that the logs are there and it doesn't take much work to pull interesting bits of data that can help provide better situational awareness. And, hopefully, it will help them catch something bad before they end up being another statistic in the Verizon report.

John Sawyer is a Senior Security Analyst with InGuardians. The views and opinions expressed in this blog are his own and do not represent the views and opinions of his employer. He can be reached at johnhsawyer@gmail.com.
