This alternative form of targeted botnet attack has become popular as botnet tools have made bots easier to purchase and deploy. Merrick Furst, botnet expert and distinguished professor of computer science at Georgia Tech, says bots are showing up "en masse" to customer-facing websites -- posing as people.
"We are seeing tens of thousands of false registrations getting through existing defense-in-depth to get accounts on websites," says Furst, who is also a member of the board of directors at Pramana, a startup that spun off from Georgia Tech, and a co-founder of Damballa -- both security firms that specialize in botnet mitigation. And these bots can walk off with data from those sites, either for competitive purposes or to sell the stolen information on the black market, according to new data from Pramana.
"Instead of humans, bots are showing up en masse" on auction, social networking, webmail, and various other websites that require registration for participation or comments, he says. "If job listings are your valuable content, what if your competitors set bots to screen-scrape and take your content out the door? This screen-scraping is costing a lot of money and becoming way more prevalent."
Botnet operators are poking holes in CAPTCHA defenses. Pramana, whose "HumanPresent" technology analyzes online activity in real time to catch fraud before it occurs, saw 60 percent of bots crashing through CAPTCHAs and other defenses at one Fortune 100 client's website.
David Crowder, CEO of Pramana, says his firm sees anywhere from a couple of thousand to tens of thousands of new bots per hour registering on legit websites -- and about 200,000 in a 15-hour period. "When we saw botnets creating a couple hundred thousand accounts ... that was not how we anticipated seeing botnets in the wild," Crowder says.
This newer form of bot abuse is a result of how simple botnet technology is to acquire these days, he says, with do-it-yourself kits and underground botnet marketplaces springing up. "It's becoming so easy to get hold of. If you want to be a botmaster, for $238 you can buy it," Crowder says.
Gunter Ollmann, vice president of research at Damballa, says this type of botnet activity -- where bots are used to create phony user accounts for nefarious purposes -- has been on the rise during the past four to six months. "There are new tools and methodologies for abusing reputation systems, and abuse of these reputation systems relies on having access to and control of many thousands of identities, which don't have to belong to real people but just have to look like it," Ollmann says.
One type of attack uses a botnet for extortion on sites such as eBay or Craigslist, he says. If a bad guy gets control of thousands of identities on one of these sites, he can influence the reputations of other buyers and sellers and extort money, for instance, Ollmann says. "If you're a small business, [such as] a handyman, criminals can reach out via email and explain that for a few thousand dollars they can guarantee you have dozens or hundreds of positive reviews on your service. If you refuse, they [will post] negative comments and your reputation will go down."
This approach lets the bad guys commit fraud from outside the victim organization, Georgia Tech's Furst says. So if a competitor wants to build a jobs website, he could join an existing one via bots and siphon the information for his own site, he says. "Imagine that I could turn loose an army of bots and subvert that site for my own purposes," he says.
One of Pramana's clients recently discovered that bots were stealing its requests for quotation (RFQs) off its website. "They found their RFQ on a competitor's website," Crowder says.
And the bots often take on human qualities to blend in -- at least when it comes to some online behaviors. Pramana's Crowder says the bots do things like mimic keyboard entry by slowing down how they enter data, rather than just injecting data into online forms, for instance. "They use mouse clicks so their movements between controls will be like that of humans," he says.
Other tactics they use: operating in the light of day during business hours and, in some cases, registering a smaller number of bots in an hour. "They try to intersperse their traffic so they won't get caught. And they are almost always operating during corporate business hours, from 8 a.m. until 6 or 7 p.m.," Crowder says. "We see lots of bot activity during the busiest parts of the day."
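The human-mimicry tactics Crowder describes -- paced keystrokes rather than instant form injection -- suggest why defenders look at input timing. The following is a minimal illustrative sketch, not Pramana's actual "HumanPresent" technology: a hypothetical heuristic that flags a form submission as bot-like when its inter-keystroke intervals are unusually fast or unusually uniform, since scripted input tends to be metronomic while human typing has natural jitter.

```python
import statistics

def looks_botlike(key_timestamps_ms, min_mean_ms=60.0, min_stdev_ms=15.0):
    """Hypothetical heuristic: flag scripted input from keystroke timing.

    key_timestamps_ms -- millisecond timestamps of each keypress in a form field.
    Returns True when the intervals are too fast or too uniform to be human.
    """
    if len(key_timestamps_ms) < 3:
        return False  # too little data to judge
    intervals = [b - a for a, b in zip(key_timestamps_ms, key_timestamps_ms[1:])]
    mean = statistics.mean(intervals)
    stdev = statistics.stdev(intervals)
    # Bots replaying keystrokes are typically faster than humans and
    # nearly metronomic (near-zero variance between keypresses).
    return mean < min_mean_ms or stdev < min_stdev_ms

# A bot replaying a keystroke every 50 ms exactly:
print(looks_botlike([0, 50, 100, 150, 200, 250]))    # True

# A human typing with natural jitter:
print(looks_botlike([0, 130, 260, 420, 510, 700]))   # False
```

In practice a single signal like this is easy to evade -- as the article notes, bots already randomize pacing and mimic mouse movement between controls -- so real systems combine many behavioral signals rather than relying on one threshold.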
Using identities set up on these legit websites or even on webmail accounts is a stepping stone to other cybercrime. "This opens doors to launch more interesting attacks," Damballa's Ollmann says. "Webmail tends to have a higher reputation score in anti-spam technology, so if you're sending an email via Gmail, you have a higher probability of not getting stopped by mail filtering because there's a higher trust with Gmail -- you see the same with social networking sites," he says.
Ollmann says the bad guys basically use bots to build reputable online identities that they then can use against other -- human -- users on those sites. "These details are collated and sold to other [underground] suppliers," he says.