

Rami Essaid

The Bot Threat For the Rest of Us: Application-Layer Attacks

Bots are getting craftier by the day so you may not even know you have a problem.

DDoS, as we all know, garners unprecedented media attention, and the volume of coverage correlates directly with the size of the attack -- the larger, the better. But DDoS attacks are only one manifestation of sophisticated bot attacks that can scrape information, fraudulently fill out forms, and otherwise erode the overall website experience. What the media often overlook are the application-layer bot attacks affecting almost every website on a daily basis.

These bots are capable of competitive data mining, account hijacking, and much more. They degrade site availability and user experience, and they steal competitive information. They often work beneath the surface, eroding a company’s brand trust while going completely undetected.

Let’s face it, more than 99% of business websites are not the target of high-profile, massive DDoS attacks. DDoS may receive the sexy headlines, but there are more serious threats lurking under the surface. Here are three that businesses face daily.

Bad bots that secretly scan and sniff your data
The real and far more likely bot threat against your business website comes from low-profile, often overlooked bots that secretly scan and test for ways to steal business data, content and intellectual property, or penetrate your defenses. These common attack types don’t come with sexy names and eye-popping figures around Gbps and the number of infected machines. Instead, they are dull and thorough, sniffing all around your website, its forms, content, data, and applications to take whatever value your business will yield. Bot attacks such as these are termed "application-layer" attacks, and they look for vulnerabilities in your web-facing applications. Once they find a way in, they wreak havoc.

Application-layer attacks come in many forms. The most common that we have seen from our database of 30 billion known violators perform price scraping, form fraud, content theft, and database intrusion/account hacking.

Bad bots masquerading as good bots
Bots are getting craftier by the day, so you may not know you have a problem. Bots targeting the application layer often mask themselves as normal site users or good bots (e.g., Google and Bing crawlers), sneaking around and grabbing what they can, from passwords and content to application and Web server vulnerabilities. Once inside your application layer, they may remain stealthy and make off with valuable business data. They may even invite in huge volumes of bots for an amplified attack (e.g., stealing millions of users’ account data).
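One well-documented way to unmask a bot impersonating a good crawler is the reverse-then-forward DNS check that Google publishes for verifying Googlebot: resolve the client IP to a hostname, confirm the hostname belongs to Google, then resolve that hostname back and confirm it maps to the original IP. A minimal sketch in Python (the function names here are illustrative, not from any particular library):

```python
import socket

# Hostnames legitimate Google crawlers resolve to, per Google's own guidance.
GOOGLE_SUFFIXES = (".googlebot.com", ".google.com")

def is_google_hostname(host):
    """Pure string check: does a reverse-DNS name belong to Google's crawl space?

    The leading dot in the suffixes matters -- it prevents lookalike domains
    such as fakegooglebot.com from passing.
    """
    return host.endswith(GOOGLE_SUFFIXES)

def is_verified_googlebot(ip):
    """Verify a client claiming (via User-Agent) to be Googlebot.

    1. Reverse DNS: IP -> hostname.
    2. Check the hostname is under googlebot.com or google.com.
    3. Forward DNS: hostname -> IPs, and confirm the original IP is among them.
    """
    try:
        host, _, _ = socket.gethostbyaddr(ip)          # reverse lookup
    except (socket.herror, socket.gaierror):
        return False
    if not is_google_hostname(host):
        return False
    try:
        return ip in socket.gethostbyname_ex(host)[2]  # forward-confirm
    except socket.gaierror:
        return False
```

The forward-confirmation step is what defeats attackers who control their own reverse-DNS records; a spoofed PTR record pointing at googlebot.com will not resolve back to the attacker's IP.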

A footnote, not a headline
Sadly, the media doesn’t pay much attention to bots. That is, until someone uses bots to amplify a website breach to produce impressive fraud or theft statistics. But even then, bots are often relegated to a footnote in the story, if they’re mentioned at all. Both Edward Snowden and Bradley Manning used bots to perpetrate the two largest data breaches in US history. Why wasn't that ever the story?

Last year’s under-reported attack on domain name registrar Namecheap.com offers another example. In August 2014, Russian hackers assembled a list of 1.2 billion stolen usernames and passwords. Using this list, attackers leveraged bots to penetrate about 30,000 user accounts by emulating the login process of legitimate users. Namecheap.com responded by aggressively blocking IP addresses.

The media’s lack of focus on bots, paired with Namecheap’s reactive approach, left website and customer data poorly protected. Perhaps if bots made it into the headlines, or at least into the stories of these breaches, more companies would recognize the need to bolster their bot defenses.

How to Defend Yourself
There are several purpose-built solutions to help companies prevent bot attacks, but if you want to tackle the problem yourself, here are some steps you can take to bolster your website’s defenses.

  • Rate limit. Make sure you do this based on unique sessions rather than IP addresses, to avoid blocking legitimate users behind a NAT. This forces bot makers to distribute their attacks across multiple machines, changing the economics of the attack.
  • Force users to execute JavaScript to access a page. There are several ways to do this, and it makes simple bots significantly harder to run, though you remain vulnerable to more advanced attacks. Plenty of scripting tools (Ruby, Selenium, etc.) let attackers scrape using a real web browser, but those are more computationally expensive and a little harder to program and scale.
  • Set up strict firewall rules. Use IP blacklists and block traffic from proxy servers, TOR exit nodes, Amazon EC2, and the like. This limits your exposure, but be aware that attackers can rotate their infrastructure faster than you can update these lists.
  • Consider adding two-factor authentication or a CAPTCHA to your username and password screen. This reduces the likelihood of brute-force login attempts. 
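The first recommendation above, session-keyed rate limiting, can be sketched with a simple sliding-window counter. This is an illustrative in-memory Python example (class and parameter names are hypothetical; a production deployment would typically back this with a shared store such as Redis):

```python
import time
from collections import defaultdict, deque

class SessionRateLimiter:
    """Sliding-window rate limiter keyed by session ID rather than IP,
    so many users sharing one NAT address are not throttled together."""

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)  # session_id -> recent request timestamps

    def allow(self, session_id, now=None):
        """Return True if this session may make a request right now."""
        now = time.monotonic() if now is None else now
        q = self.hits[session_id]
        # Evict timestamps that have aged out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # over the limit; force the bot to spread out
        q.append(now)
        return True
```

Because the key is a per-session token (e.g., a signed cookie) instead of an IP, a bot operator has to mint and manage many sessions across many machines to sustain an attack, which is exactly the economic pressure the bullet point describes.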

While none of these methods are foolproof, it is good security hygiene to be as prepared as possible. After all, just because a burglar can still break down your door, doesn’t mean you shouldn’t bother locking it.

Rami is the Co-founder and CEO of Distil Networks, the global leader in bot detection and mitigation. He began his career as the founder and CEO of Chit Chat Communications. After a successful exit, he consulted in mobile development. With over 11 years in communications, ...
