Invisible Invaders: Why Detecting Bot Attacks Is Becoming More Difficult
Traditional methods can't block the latest attackers, but a behavioral approach can tell the difference between bots and humans.
In a recent automated attack, a large bot army hacked into accounts using brute-force methodology and a highly accurate username and password list. PerimeterX researchers discovered that by overwhelming sites with requests from a network of tens of thousands of Internet of Things devices, such as Canon printers and networking equipment, and with each bot sending just a single request every 10 minutes or so, the attacker completed more than 5 million attempts per day. Furthermore, the attack was successful on 8% of attempts, breaching a shocking 400,000 accounts per day.
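To put those numbers in perspective, here is a back-of-the-envelope calculation in TypeScript. The 35,000-device count is an assumption chosen to illustrate the math; the researchers reported only "tens of thousands" of compromised devices.

```typescript
// Rough math behind a low-and-slow credential-stuffing botnet.
// The 35,000-device figure is an assumption for illustration only.
const devices = 35_000;
const requestsPerDevicePerDay = (24 * 60) / 10; // one request every ~10 minutes = 144/day
const successRate = 0.08;                        // 8% of attempts breach an account

const attemptsPerDay = devices * requestsPerDevicePerDay;    // ≈ 5 million
const accountsBreachedPerDay = attemptsPerDay * successRate; // ≈ 400,000

console.log(
  `${attemptsPerDay.toLocaleString()} attempts/day, ` +
  `${Math.round(accountsBreachedPerDay).toLocaleString()} accounts breached/day`
);
```

Because each device makes only one request every 10 minutes, no single bot ever looks like a flood, yet the aggregate volume is enormous.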
How can such an attack be so successful? Attackers and the bots they create are in a technological arms race in which companies are always on the defense, trying to catch up. Next-generation bots are outsmarting companies every day. Detecting and deterring these often-invisible attacks is difficult, and the standard tricks of the trade, such as logfile analysis, are inadequate.
What These Next-Gen Bots Can Do
The new bots are today's sophisticated automated attackers — but they're standing on the shoulders of 20 years of bot evolution. They originate as malware, often infiltrating through a browser extension. However, these newer bots have one unique marker in common: they latch onto a host user. In effect, they're parasites. Under the guise of their host, they go undetected as they perform account takeover, malware distribution, and fraud.
Past bots could be defeated by blacklisting their IP addresses or by detecting the absence of cookies or their inability to perform simple tasks, like running JavaScript. Bots eventually evolved into "headless browsers": scripting engines that behave like a real browser, running JavaScript and fully rendering pages. Headless browsers can be "outed" by challenge tests, such as asking them to render a sound or an image to prove they are a genuine browser.
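As a minimal sketch of what such a challenge might look like, the browser-side TypeScript below asks the engine to actually draw to a canvas and checks one common automation tell-tale. This is a simplified illustration, not any vendor's actual check; production systems combine many more signals.

```typescript
// Hypothetical client-side challenge probe: does this "browser" really render?
function probeBrowser(): { webdriver: boolean; canvasRendered: boolean } {
  // navigator.webdriver is set to true by most automation frameworks.
  const webdriver = (navigator as any).webdriver === true;

  // Ask the engine to draw text; a bare scripting engine with no rendering
  // stack cannot produce meaningful pixel data.
  const canvas = document.createElement("canvas");
  canvas.width = 64;
  canvas.height = 16;
  const ctx = canvas.getContext("2d");
  let canvasRendered = false;
  if (ctx) {
    ctx.fillStyle = "#3366cc";
    ctx.fillText("challenge", 2, 12);
    // A blank or near-empty data URL suggests rendering was skipped or faked.
    canvasRendered = canvas.toDataURL().length > 100;
  }
  return { webdriver, canvasRendered };
}

// In practice, the result would be sent to a server for scoring.
console.log(probeBrowser());
```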
Because these next-gen bots are more sophisticated and look as if they're operating in a real user environment, traditional detection methods can't identify them, let alone block them.
How They Attack
Disguised as normal users, these next-generation bots perform numerous types of attacks on a company's website, but remain invisible to a Web application firewall, for example.
The attacker will find various ways to extract money from the website. These techniques include account takeover, in which stolen accounts are sold on the Dark Web and used for fraud; fake account creation; testing stolen credit cards; and brute-forcing gift cards by guessing their numbers to cash out their balances. There's also click fraud, in which bots are instructed to invisibly browse different sites and click on ads to extract money from advertisers.
Another disruptive and damaging attack is checkout abuse. Nearly everyone has encountered this when purchasing concert tickets. Within a minute, the event is sold out, and it's guaranteed that none of the tickets was bought by a human.
Steps for Detection and Protection
Since the Internet became commercialized in the mid-1990s, nearly all bot attacks have involved bots performing functions on a website in ways that a human also could. Newer and more versatile bots are much harder to detect, as they are malware running on real users' browsers or devices, hiding behind real people's activity by shadowing their legitimate sessions and injecting hidden activities of their own. How can these bots be detected?
Signature-based systems, once the best available method for detecting bots, look for specific patterns in a request, such as a sequence of words in the request packet. They can also pattern-match on malformed requests designed to probe for problems in how a site is set up or coded. However, this is akin to playing catch-up, with the attacks constantly changing their "look." These older defenses fail against next-gen bots, whose sophistication lets them convincingly duplicate a real user's behavior and environment and send requests that are indistinguishable from those made by humans.
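As a rough illustration of how a signature-based check works, consider the TypeScript sketch below. The patterns are generic, illustrative examples rather than any product's rule set, and they show why a bot that simply replays a well-formed login request sails straight through.

```typescript
// Stripped-down signature matching: test the raw request against known bad patterns.
const signatures: RegExp[] = [
  /union\s+select/i, // classic SQL-injection probe
  /\.\.\/\.\.\//,    // directory traversal
  /<script[^>]*>/i,  // reflected-XSS payload
];

function matchesSignature(requestBody: string): boolean {
  return signatures.some((sig) => sig.test(requestBody));
}

// A next-gen bot replaying a legitimate-looking login request matches nothing:
console.log(matchesSignature("username=alice&password=hunter2")); // false
```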
With signature-based detection systems not offering a viable solution, companies can consider a behavioral approach, which distinguishes bots from humans. (Disclosure: PerimeterX is one of many vendors that offer behavioral-based solutions.) Behavioral approaches work by identifying behavior that is not human, as opposed to recognizing known bot behavior. A simple example: humans move the mouse in a somewhat random fashion while interacting with a webpage, and they move the mouse toward a button before clicking it. If a click lands on the same pixel of a checkbox instantly, with no mouse movement leading up to it, the user almost certainly isn't human.
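A minimal sketch of that mouse-movement check appears below, in browser-side TypeScript. The thresholds and time window are illustrative assumptions, not values from any particular product.

```typescript
// Flag a click that arrives with no natural mouse trajectory leading up to it.
let recentMoves: Array<{ x: number; y: number; t: number }> = [];

document.addEventListener("mousemove", (e) => {
  recentMoves.push({ x: e.clientX, y: e.clientY, t: Date.now() });
  // Keep only the last two seconds of movement.
  const cutoff = Date.now() - 2000;
  recentMoves = recentMoves.filter((m) => m.t >= cutoff);
});

document.addEventListener("click", (e) => {
  // A human generates many mousemove events approaching the target;
  // a scripted click typically fires with none at all.
  const movesNearClick = recentMoves.filter(
    (m) => Math.hypot(m.x - e.clientX, m.y - e.clientY) < 200
  );
  if (movesNearClick.length < 3) {
    // In practice this signal would feed a server-side scoring engine,
    // not be acted on directly in the page.
    console.warn("Click with no natural mouse trajectory - possible bot");
  }
});
```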
This analysis can be applied at the user, browser, and network levels, and offers the possibility of staying ahead of the newest bad bots and their even-trickier descendants in the coming years. Companies on the offense against advanced automated attacks need to take new routes like these. Only then can they confidently answer this question: which users on our website are human?