Kaan Onarlioglu | Commentary | 10/10/2018
Security Researchers Struggle with Bot Management Programs

Bots are a known problem, but researchers will tell you that bot defenses create problems of their own when it comes to valuable data.

Bot management is all the rage in the security world. Every day, I find myself bombarded with articles proclaiming that N percent of Internet traffic is generated by bots, where N is a sufficiently alarming number to make most executives want to dash out and purchase the first bot-defense product in sight. While I can't speak for the accuracy of those reports, one thing's certain: There's a growing demand for effective bot mitigation.

I know. I work for a company that develops one such bot management solution, and I talk to customers about it daily. I do enjoy having some semblance of job security, but being the recovering academic that I am, I'm also really concerned. Conducting large-scale Internet crawls is an all too common task in many fields of security research. Does the research community fully understand the implications of bot defenses on their experiments? Do they do anything about it? I am not optimistic.

"Bot" is a notoriously overloaded term with numerous meanings. Today, it is generally understood to mean any software that performs automated tasks over the Internet. This includes malware, such as the bots that make up a botnet, but also benign software like search engine crawlers and information aggregators. Conveniently, this definition aligns with the features of popular bot management solutions: businesses certainly want malware protection, but they also have strong incentives to monitor, limit, block, or even serve false content to automated requests reaching their web properties.

This is a serious problem for security researchers.

Data collection via Internet crawls is a crucial part of security research. In my own work, I crawled millions of websites and scraped application stores, code repositories, forums, vulnerability databases, and more. Think about it. Researchers meticulously design experiments, build and analyze invaluable data sets in a scientific framework, and (sometimes literally) fight to publish and present their results at prestigious conferences, only to discover that their data set was tainted by a plethora of bot defenses scattered around the Internet.

In the best case, the collected data would be biased because servers equipped with bot defenses would block the connection or return a static page without meaningful content. And if worst comes to worst, servers that return false information to thwart information harvesters could make it nigh impossible to even detect that something somewhere went wrong.
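To make that "best case" a little more concrete, a crawler can at least flag responses that look like block pages or placeholder content rather than real data. The sketch below is only illustrative: the status codes, size threshold, duplicate threshold, and record layout are assumptions of mine, not anything prescribed in this article or by any particular product.

```python
import hashlib
from collections import defaultdict

# Illustrative sketch: flag crawl responses that are likely bot-defense
# block pages -- e.g., many different URLs returning byte-identical or
# near-empty bodies. All thresholds here are arbitrary assumptions.

BLOCK_STATUS_CODES = {403, 429, 503}   # codes commonly used to refuse bots
MIN_BODY_BYTES = 512                   # suspiciously short pages
DUPLICATE_THRESHOLD = 20               # same body seen on this many URLs

def fingerprint(body: bytes) -> str:
    """Hash the response body so identical pages collapse to one key."""
    return hashlib.sha256(body).hexdigest()

def flag_suspect_responses(responses):
    """responses: iterable of (url, status_code, body_bytes) tuples.

    Returns the set of URLs whose responses look like block pages or
    placeholder content rather than meaningful data.
    """
    by_hash = defaultdict(list)
    suspects = set()

    for url, status, body in responses:
        if status in BLOCK_STATUS_CODES or len(body) < MIN_BODY_BYTES:
            suspects.add(url)
        by_hash[fingerprint(body)].append(url)

    # Many distinct URLs sharing one exact body is a strong hint of a
    # static interstitial ("Access denied", CAPTCHA prompt, and so on).
    for urls in by_hash.values():
        if len(urls) >= DUPLICATE_THRESHOLD:
            suspects.update(urls)

    return suspects
```

Of course, this only catches the obvious cases; servers that deliberately return plausible-looking false content will sail right past checks like these.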

I have no reason to doubt this situation significantly affects Internet crawls and measurement studies — today. In all likelihood, we regularly work with bad data, and then publish and read papers with skewed results. But we just don't yet have insights into how data collection is affected by bot defenses.

A solution is not likely to come from the business side. Widespread adoption of bot defenses won't be tapering off anytime soon. There simply isn't enough motivation for businesses to back down from their strong stance against bots; they won't forgo protection to accommodate a few innocuous crawlers among myriad malicious hits.

As far as researchers are concerned, there's always been a certain degree of awareness of anti-crawling techniques. Researchers came up with best practices such as crafting realistic request headers, limiting connection rates, and building crawlers on headless browsers. However, modern bot defenses are well-prepared to catch these tricks; they analyze browser characteristics, connection patterns, packet structure, and even hardware inputs, and combine these observations in nontrivial ways to distinguish between humans and our robot overlords.
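For readers who haven't built a crawler before, here is a minimal sketch of those "classic" best practices, using Python's requests library: browser-like request headers and a polite, fixed request rate. The header values and the one-second delay are placeholder assumptions, and, as noted above, modern bot defenses look well beyond these signals.

```python
import time
import requests

# Minimal sketch of traditional anti-anti-crawling hygiene: realistic
# request headers plus a modest request rate. Header strings and the
# delay are placeholders, not recommendations.

HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/69.0.3497.100 Safari/537.36",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.9",
}

def polite_crawl(urls, delay_seconds=1.0):
    """Fetch each URL with browser-like headers and a fixed delay."""
    session = requests.Session()
    session.headers.update(HEADERS)
    results = {}
    for url in urls:
        try:
            resp = session.get(url, timeout=10)
            results[url] = (resp.status_code, resp.text)
        except requests.RequestException as exc:
            results[url] = (None, str(exc))
        time.sleep(delay_seconds)  # crude rate limiting
    return results
```

Headless-browser crawling follows the same spirit at a higher fidelity, but it, too, leaves fingerprints that current products are designed to spot.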

Yes, even the most intricate defense can be reverse-engineered and bypassed given enough resources and dedication. The bar, however, is high. Faced with a growing number of evolving bot management products, researchers are perpetually at a disadvantage.

The Need for Change
We need a paradigm shift. Here is an idea: The next time we run a crawl, let's acknowledge that the entire Internet is out there to corrupt our data, and duly deal with it! Data validation is key. Questionable data collection methodologies and low-quality data sets aren't exactly unknown territory for the research community, but we need even greater focus on this issue today.

I'm all too familiar with that urge to rush through data collection and get to the more interesting data analysis (and then submit a half-decent paper minutes before a deadline). This approach is missing the mark if it leads to inaccurate measurements and incorrect conclusions.

Data validation is a hard problem, but at the same time it's a well-explored area of computer science. We have the necessary tools, like constraint validation for predictable data, or clustering to spot outliers in complex data sets. When all else fails, manual analysis combined with sampling can be a surprisingly effective and viable approach, even for extremely large data sets. It's well worth putting in the extra time and effort to systematically validate data, and to write at length about the process in publications, so that reviewers and readers know we did our part.
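As a concrete, if simplified, illustration of those three tools, the sketch below assumes each crawl record is a small dictionary with url, status, and html_len fields; the specific constraints, the 3-sigma outlier cutoff, and the sample size are arbitrary choices for demonstration, not prescriptions.

```python
import random
import statistics

# Sketch of three validation ideas: constraint checks on predictable
# fields, simple statistical outlier detection, and random sampling for
# manual review. Record fields and thresholds are assumptions.

def violates_constraints(record):
    """Constraint validation for fields whose valid range is known."""
    return (
        record.get("status") != 200
        or not record.get("url", "").startswith(("http://", "https://"))
        or record.get("html_len", 0) <= 0
    )

def length_outliers(records, z_cutoff=3.0):
    """Flag records whose page length deviates far from the mean."""
    lengths = [r["html_len"] for r in records]
    mean = statistics.fmean(lengths)
    stdev = statistics.pstdev(lengths) or 1.0
    return [r for r in records if abs(r["html_len"] - mean) / stdev > z_cutoff]

def manual_review_sample(records, k=50, seed=42):
    """Random sample to eyeball by hand, even for a huge data set."""
    random.seed(seed)
    return random.sample(records, min(k, len(records)))
```

None of this is sophisticated, and that is the point: even simple, documented checks like these make a measurement far more defensible than raw crawl output.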

Finally, I'll point out that this problem has an interesting beneficial side effect: the potential to open up unique research directions. Enabling functional yet ethical crawling techniques that are also aligned with businesses' needs is one obvious route this can take. However, I also anticipate novel techniques that can scientifically quantify the impact of bot defenses on measurements.

With better insights and visibility into this issue, we can better recognize our limitations, and pursue the promising paths toward a solution.


Kaan Onarlioglu is a researcher and engineer at Akamai who is interested in a wide array of systems security problems, with an emphasis on designing practical technologies with real-life impact. He works to make computers and the Internet secure — but occasionally ...