Goal is to help websites detect and block bad bot traffic, vendor says.


Content delivery network Cloudflare has launched a new feature that it says will help users of its services prevent malicious bots from scraping their websites, stealing credentials, misusing APIs, or launching other attacks.

Starting this week, site operators have the option to turn on a "bot fight mode" in the firewall settings of their Cloudflare dashboards. When the feature is enabled, Cloudflare will begin "tarpitting" any bots on their sites that it detects as malicious. It will also attempt to have the IP address from which the bot originated kicked offline.

Tarpitting is a technique that some cloud service providers use to increase the cost of a bot attack for its operators. Some tarpits work by significantly delaying responses to bad bot requests; others send bots down blind alleys, much the way honeypots do for malware.

In Cloudflare's case, when its security mechanisms detect traffic coming from a malicious bot, it deploys CPU-intensive code that slows down the bot and forces the bot writer to expend more CPU cycles, increasing costs for them in the process.

To identify whether a bot is bad, Cloudflare analyzes data from a variety of sources, including its Gatebot DDoS mitigation system and from the over 20 million sites that use its service. The company looks at data such as abnormally high page views or bounce rates, unusually high or low session durations, and spikes in traffic from unexpected locations to automatically detect bad bots. According to Cloudflare, its bot detection mechanisms challenge some 3 billion bot requests per day.
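Cloudflare has not published the details of its detection logic, but a toy version of heuristic scoring over the signals described above might look like the following sketch. The field names and thresholds are illustrative assumptions, not Cloudflare's actual model:

```python
from dataclasses import dataclass

@dataclass
class TrafficStats:
    """Aggregate traffic signals for one source, per the signals in the article."""
    page_views_per_minute: float
    bounce_rate: float           # fraction of sessions that view only one page
    avg_session_seconds: float

# Illustrative fixed cutoffs only -- Cloudflare's real detection is driven by
# data from millions of sites and its Gatebot system, not hard-coded thresholds.
def looks_like_bad_bot(stats: TrafficStats) -> bool:
    if stats.page_views_per_minute > 300:   # abnormally high page views
        return True
    if stats.bounce_rate > 0.95:            # nearly every session bounces
        return True
    if stats.avg_session_seconds < 1.0:     # sessions too short for a human
        return True
    return False
```

In practice a production system would combine many such signals into a score and learn the thresholds from observed traffic rather than fixing them by hand.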

"Tarpitting is taking measures to slow down the attack first rather than block it outright," a Cloudflare spokeswoman says. Blocking outright allows a bot to move onto another target quickly, she says. "Tarpitting allows us to impact the bot by wasting some of its time and resources," she adds. An example of this would be requiring the bot to solve a very computationally heavy math challenge, the spokeswoman notes.

The Bad Bot Problem
Such measures have become crucial because of the high and growing proportion of Internet traffic generated by automated bots. Not all of them are malicious. Many bots, such as those search engines use to crawl the Web or those used to monitor website metrics or check for copyright violations, serve useful and often critical functions.

However, many more are used for malicious and other potentially unwanted purposes, such as credential stuffing attacks, submitting junk data via online forms, scraping content, or breaking into user accounts. Sometimes even bots that are considered legitimate — such as inventory hoarding bots that lock up a retailer or ticketing website's inventory — can be a major problem.

A Distil Networks report earlier this year found that nearly 38% of all Internet traffic in 2018 came from automated bots, both bad and good. Bad bots alone accounted for a startling 20.4% of all traffic on the Internet last year.

"Depending on the business of the organization, the problem can range from problematic to some parts of the business, such as stuffing sales leads on a website, to absolutely crippling, [such as] inventory hoarding and outright theft," the Cloudflare spokeswoman says.

Current blocking approaches are effective at preventing one bot from attacking one website, but they do little to stop the bot from simply moving on to a softer target. "The intention of bot fight mode is to make bots spend more time and resources before being able to move on," the spokeswoman notes.

In addition to tarpitting, Cloudflare will also work to shut down any IP that is sending out bad bots. If the provider hosting the bot happens to be a partner, Cloudflare will hand over the IP to the partner. If the provider is not a partner, Cloudflare will still notify it of the bad IP while continuing to tarpit any traffic that originates from it.

Franklyn Jones, chief marketing officer at Cequence Security, says one reason for the high proportion of bad bots is the ease with which they can be deployed. "Launching an automated bot attack is a surprisingly simple process," Jones says. "It requires only previously stolen credentials, software to plan and orchestrate the launch, and a proxy infrastructure to scale and obfuscate the attack."

Because the total price tag could be just a few hundred dollars, bad actors see this strategy as a path of least resistance, he says. A survey that Osterman Research conducted on behalf of Cequence last year found that average enterprise organizations experience some 530 botnet attacks daily.

"These automated attacks have many goals, including account takeover, fake account creation, gift card fraud, content scraping, and other application business logic abuse," Jones says.


About the Author(s)

Jai Vijayan, Contributing Writer

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year career at Computerworld, Jai also covered a variety of other technology topics, including big data, Hadoop, Internet of Things, e-voting, and data analytics. Prior to Computerworld, Jai covered technology issues for The Economic Times in Bangalore, India. Jai has a Master's degree in Statistics and lives in Naperville, Ill.

