From criminals to competitors, online bots continue to scrape information from sites and pose as legitimate users.


Websites increasingly have to watch out for automated programs posing as human visitors — in other words, bots, which continue to become more sophisticated, according to a new report from bot mitigation firm Distil Networks.

While bot traffic has fallen as an overall percentage of visits to websites, the automated programs have become more sophisticated in their attempts to appear human. Financial firms, ticketing services, and educational sites see anywhere from 38% to 42% of their traffic come from bots, and both ticketing and healthcare top the industries targeted by the most sophisticated bots, according to the "2019 Bad Bot Report," based on data Distil collected during 2018.

"Bots are moving from the traditional scraping and ticketing and airlines bots, which are the industries that have been the most victimized up to now," says Edward Roberts, senior director of product marketing at Distil. "They are now moving to these other industries, and we have seen a lot of fraud cases in those markets."

Automated programs have been a key component of the Internet economy, albeit inhabiting a gray area of information collection. From automating port scanning, to collecting price information from e-commerce hubs, to the site indexing and scanning done by Google, bots have become the basis for many Internet firms' business models.

Good bots do not harm the business models of those companies from which they scrape data. But bad bots are collecting information on behalf of competitors or, worse, are the vehicle for outright fraud. Criminals can use bots, for example, to test usernames and passwords, fraudulently boost product ratings, or conduct ad fraud. 
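Credential testing of the kind described above tends to leave a telltale pattern: many failed logins from a single source in a short window. As a minimal sketch of that idea (the threshold, window, and function names here are illustrative, not any vendor's actual method):

```python
from collections import defaultdict, deque
import time

# Hypothetical thresholds: flag an IP when it produces more than
# MAX_FAILURES failed logins inside a WINDOW_SECONDS sliding window.
WINDOW_SECONDS = 60
MAX_FAILURES = 10

_failures = defaultdict(deque)  # ip -> timestamps of recent failed logins

def record_failed_login(ip, now=None):
    """Record a failed login; return True if the IP looks automated."""
    now = time.time() if now is None else now
    q = _failures[ip]
    q.append(now)
    # Drop failures that have fallen out of the sliding window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_FAILURES
```

Real bot-mitigation products combine many such signals; a lone rate check like this is easily evaded by bots that rotate IP addresses.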

"Many companies are finally recognizing that they are under attack," says Amy DeMartine, principal analyst for application security at market research firm Forrester. "They go from not caring whatsoever to needing a solution right now. The problem is that they were under attack all along and didn't realize that until a specific incident."

There are some indications of improvement. Over the past year, humans have taken back a significant portion of Web visits, accounting for 62% of all traffic (up from 55% in 2017). The gains represent a flip-flop from five years ago, when bots made up about 60% of all traffic, according to Distil's report.

Yet the sophistication of bots continues to increase. In November, for example, bot detection firm White Ops announced it had found a large-scale ad fraud operation, dubbed 3ve, powered by compromised PCs that drove billions of daily ad requests and netted between $3 million and $5 million per day. The investigation led to the arrests of three men and criminal charges against five more people.

More than 21% of all bad bots are considered sophisticated, according to Distil.

In another recent report, Internet infrastructure firm Akamai also warned of the increasing sophistication of bots and the operations behind them. The company found that bad bots are increasingly trying to appear human or, at least, mask their origins by changing Internet addresses and modifying their digital fingerprints to match known-good applications.
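One way defenders catch that kind of fingerprint spoofing is a consistency check: a request that claims to come from a mainstream browser should also carry the headers that browser normally sends. A toy version of the idea (the expected-header sets and function name below are illustrative assumptions, not Akamai's detection logic):

```python
# Illustrative only: headers a given browser "normally" sends. Real
# fingerprinting uses far richer signals (TLS, header order, JS probes).
EXPECTED_HEADERS = {
    "chrome": {"accept-language", "accept-encoding", "sec-ch-ua"},
    "firefox": {"accept-language", "accept-encoding"},
}

def fingerprint_mismatch(user_agent, headers):
    """Return True when the claimed browser is missing headers it would
    normally send -- a hint that the 'browser' may really be a bot."""
    ua = user_agent.lower()
    present = {h.lower() for h in headers}
    for browser, expected in EXPECTED_HEADERS.items():
        if browser in ua:
            return not expected.issubset(present)
    return False  # unknown agent string: nothing to cross-check
```

Sophisticated bots defeat simple checks like this by copying a real browser's full header set, which is why the report emphasizes behavioral signals over static fingerprints.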

"The complexity of attacking bots, rather than the volume, should be what concerns defenders most," says Martin McKeay, security researcher and editorial director at Akamai. "Bot development has moved from being an individual working on her own tools into a methodology that would't be unfamiliar to many teams in the DevOps world. The organizations selling bots are actively looking for developers with skills related to individual businesses and overcoming defenses by name."

The most sophisticated bots are impacting the ticketing business and healthcare, according to Distil. Nearly 28% of the bad bots scraping ticketing sites and reserving tickets are programs that use mouse movements, browser automation software, and malware-infected PCs to camouflage themselves as human traffic, according to Distil.

The existence of a great deal of sensitive personally identifiable information (PII) makes healthcare potentially lucrative, Distil's Roberts says. 

"Once you gather the PII, you can get a good profile of that person," he says. "If you are in healthcare, someone can get information on insurance and health conditions or fulfill a prescription that way. It is an area ripe for abuse."

While a relatively new target, healthcare is a popular one for more advanced techniques, with 24% of its bad bots considered "sophisticated," according to Distil's report.



About the Author(s)

Robert Lemos, Contributing Writer

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline Journalism (Online) in 2003 for coverage of the Blaster worm. Crunches numbers on various trends using Python and R. Recent reports include analyses of the shortage in cybersecurity workers and annual vulnerability trends.

