Threat Intelligence

8/6/2018
10:20 AM

Spot the Bot: Researchers Open-Source Tools to Hunt Twitter Bots

Their goal? To create a means of differentiating legitimate from automated accounts and detail the process so other researchers can replicate it.

What makes Twitter bots tick? Two researchers from Duo Security wanted to find out, so they designed bot-chasing tools and techniques to separate automated accounts from real ones.

Automated Twitter profiles have made headlines for spreading malware and influencing online opinion. Earlier research has dug into the process of creating Twitter datasets and finding potential bots, but none has discussed how researchers can find automated accounts on their own.

Duo's Olabode Anise, data scientist, and Jordan Wright, principal R&D engineer, began their project to learn how to pinpoint the characteristics of Twitter bots, regardless of whether the bots were harmful. Hackers of all intentions can build bots and use them on Twitter.

Their goal was to create a way to differentiate legitimate accounts from automated ones and to document the process so other researchers can replicate it. They'll present their tactics and findings this week at Black Hat in a session entitled "Don't @ Me: Hunting Twitter Bots at Scale."

Anise and Wright began by compiling and analyzing 88 million Twitter accounts and their usernames, tweet counts, followers/following counts, avatars, and descriptions, all of which would serve as a massive dataset in which they could hunt for bots. The data dates from May to July 2018 and was pulled via the Twitter API used to access public data, Wright explains.

"We wanted to make sure we were playing by the rules," Wright notes, since doing otherwise would compromise other researchers' ability to build on their work using the same method. "We're not trying to go around the API and go around limits and tools in place to get more data."

Once they obtained a dataset, the researchers created a "classifier," which detected bots in their massive pool of information by hunting for traits specific to bot accounts. But first they had to determine the details and behaviors that set bots apart.
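The article doesn't disclose how Duo's classifier works internally, so the following is only an illustrative stand-in: a toy scorer that counts how many bot-like thresholds an account's traits cross. The trait names, thresholds, and cutoff below are invented for illustration, not the researchers' actual model.

```python
# Hypothetical trait thresholds -- invented values, not Duo's.
BOT_THRESHOLDS = {
    "tweets_per_day": 50,        # hyperactive posting
    "hashtags_per_tweet": 5,     # hashtag stuffing
    "duplicate_text_ratio": 0.5, # repeated content
}

def bot_score(features):
    """Return the fraction of traits that exceed their bot-like threshold."""
    hits = sum(1 for k, v in BOT_THRESHOLDS.items() if features.get(k, 0) > v)
    return hits / len(BOT_THRESHOLDS)

def is_probable_bot(features, cutoff=0.5):
    """Flag an account whose traits cross at least half the thresholds."""
    return bot_score(features) >= cutoff
```

A real classifier would weight traits learned from labeled data rather than count fixed thresholds, but the structure is the same: reduce an account to trait values, then score them.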

What Makes Bots Bots?
Indeed, one of the researchers' goals was to learn the key traits of bot accounts, how they are controlled, and how they connect. "The thing about bot accounts is they can come up with identifying characteristics," Anise explains. Traits may change depending on the operator.

Bot accounts are hyperactive: Their likes and retweets are constant throughout the day and into the night. They reply to tweets quickly, Wright says. If a tweet has more than 30 replies within a few seconds, they can deduce bot activity is to blame. An account's number of followers and following can also indicate bot activity depending on when the account was created. If a profile is fairly new and has tens of thousands of followers, it's another suspicious sign.

In their research, Anise and Wright came up with 20 of these defining traits, which also included the number of unique accounts being retweeted, number of tweets with the same content per day, number of daily tweets relative to account age, percentage of retweets with URLs, ratio of tweets with photos vs. text only, number of hashtags per tweet, and distance between geolocated tweets.

Hunting Bots on the Web
The researchers' classifier tool dug through the data and leveraged these filters to detect automated accounts. Once they found initial sets of bots, they took further steps to determine whether the bots were isolated or part of a larger botnet controlled by a single operator.

"We could still use very straightforward characteristics to accurately find new bots," Wright says. "Bots at a larger scale, in general, are using many of the same techniques they have in the past few years." Some bots evolve more quickly than others depending on the operator's goals.

Their tool may have been accurate for this dataset, but Anise says many bot accounts are subtly disguised. Oftentimes accounts appeared to be normal but displayed botlike attributes.

In May, for example, the pair found a cryptocurrency botnet made up of automated accounts, which spoofed legitimate Twitter accounts to spread a giveaway scam. Spoofed accounts had randomly generated usernames and copied legitimate users' photos. They spread spam by replying to real tweets posted by real users, inviting them to join a cryptocurrency giveaway.

The botnet, like many of its kind, used several methods to evade detection. Oftentimes, malicious bots spoof celebrities and high-profile accounts as well as cryptocurrency accounts, edit profile photos to avoid image detection, and use screen names that are typos of real ones. This one went on to impersonate Elon Musk and news organizations such as CNN and Wired.
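One of the evasion tactics above — screen names that are typos of real ones — can be approximated with edit distance. This is a hedged sketch of the idea, not the botnet's technique or Duo's detection code; the distance cutoff is an assumption.

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming (two rows)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def looks_like_typosquat(name, real_names, max_distance=2):
    """Flag a screen name within a few edits of a known legitimate name
    (but not identical to it)."""
    name = name.lower()
    return any(0 < edit_distance(name, r.lower()) <= max_distance
               for r in real_names)
```

This catches tricks like substituting "rn" for "m" in a celebrity's handle, though real detection would also have to handle homoglyphs and profile-photo cloning.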

Joining the Bot Hunters
Anise and Wright are open-sourcing the tools and techniques they used to conduct their research in an effort to help other researchers build on their work and create new methodologies to identify malicious Twitter bots.

"It's a really complex problem," Anise adds. They want to map out their strategy and show how other people can use their work to continue mapping bots and botnet structures.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance & Technology, where she covered financial ...
