Threat Intelligence | 8/6/2018 10:20 AM

Spot the Bot: Researchers Open-Source Tools to Hunt Twitter Bots

Their goal? To create a means of differentiating legitimate from automated accounts and detail the process so other researchers can replicate it.

What makes Twitter bots tick? Two researchers from Duo Security wanted to find out, so they designed bot-chasing tools and techniques to separate automated accounts from real ones.

Automated Twitter profiles have made headlines for spreading malware and influencing online opinion. Earlier research has dug into the process of creating Twitter datasets and finding potential bots, but none has discussed how researchers can find automated accounts on their own.

Duo data scientist Olabode Anise and principal R&D engineer Jordan Wright began their project to learn how to pinpoint the characteristics of Twitter bots, regardless of whether the bots were harmful. Hackers of all intentions can build bots and use them on Twitter.

The goal was to create a means of differentiating legitimate from automated accounts and detail the process so other researchers can replicate it. They'll present their tactics and findings this week at Black Hat in a session entitled "Don't @ Me: Hunting Twitter Bots at Scale."

Anise and Wright began by compiling and analyzing 88 million Twitter accounts, recording each account's username, tweet count, follower/following counts, avatar, and description, all of which would serve as a massive dataset in which they could hunt for bots. The data dates from May to July 2018 and was pulled via the Twitter API used to access public data, Wright explains.

"We wanted to make sure we were playing by the rules," Wright notes, since doing otherwise would compromise other researchers' ability to build on their work using the same method. "We're not trying to go around the API and go around limits and tools in place to get more data."

Once they obtained a dataset, the researchers created a "classifier," which detected bots in their massive pool of information by hunting for traits specific to bot accounts. But first they had to determine the details and behaviors that set bots apart.
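In outline, training such a classifier could look like the sketch below, which assumes scikit-learn, hand-labeled bot and human examples, and per-account feature vectors of the kind described in the next section; the article does not name Duo's actual model, so the random forest here is an assumption.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder inputs: X holds one row of bot-indicating traits per account,
# y holds 1 for accounts labeled as bots and 0 for accounts labeled human.
X = np.load("account_features.npy")
y = np.load("account_labels.npy")

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")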

What Makes Bots Bots?
One of the researchers' goals was to learn the key traits of bot accounts, how they are controlled, and how they connect. "The thing about bot accounts is they can come up with identifying characteristics," Anise explains, though those traits may change depending on the operator.

Bot accounts are hyperactive: their likes and retweets continue at a constant pace throughout the day and into the night, and they reply to tweets quickly, Wright says. If a tweet draws more than 30 replies within a few seconds of being posted, the researchers can deduce that bot activity is to blame. An account's follower and following counts can also indicate bot activity, depending on when the account was created: a fairly new profile with tens of thousands of followers is another suspicious sign.
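Two of those signals, tweet volume relative to account age and follower counts on very young accounts, are simple enough to sketch directly; the thresholds below are illustrative guesses, not the researchers' tuned values.

from datetime import datetime, timezone

def looks_suspicious(profile, now=None):
    """Flag profiles whose activity or growth outpaces plausible human use.
    Assumes profile["created_at"] is a timezone-aware UTC datetime."""
    now = now or datetime.now(timezone.utc)
    age_days = max((now - profile["created_at"]).days, 1)
    # Hyperactivity: far more tweets per day than a person plausibly produces.
    if profile["tweet_count"] / age_days > 100:
        return True
    # A nearly new account with tens of thousands of followers.
    if age_days < 30 and profile["followers"] > 10_000:
        return True
    return False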

In their research, Anise and Wright came up with 20 of these defining traits, including the number of unique accounts being retweeted, the number of tweets with the same content per day, the number of daily tweets relative to account age, the percentage of retweets containing URLs, the ratio of tweets with photos vs. text only, the number of hashtags per tweet, and the distance between geolocated tweets.
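A few of those traits can be computed straightforwardly from an account's profile and recent tweets, as in this sketch; the field names are assumptions for illustration, not Duo's schema.

def extract_features(profile, tweets, age_days):
    """Return a partial feature vector for one account (4 of the 20 traits)."""
    retweets = [t for t in tweets if t["is_retweet"]]
    return [
        # Number of unique accounts being retweeted.
        len({t["retweeted_user"] for t in retweets}),
        # Number of daily tweets relative to account age.
        profile["tweet_count"] / max(age_days, 1),
        # Percentage of retweets containing URLs.
        sum(1 for t in retweets if t["has_url"]) / max(len(retweets), 1),
        # Average number of hashtags per tweet.
        sum(t["hashtag_count"] for t in tweets) / max(len(tweets), 1),
    ]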

Hunting Bots on the Web
The researchers' classifier tool dug through the data and leveraged these filters to detect automated accounts. Once they found initial sets of bots, they took further steps to determine whether the bots were isolated or part of a larger botnet controlled by a single operator.
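One plausible way to take that second step is to link detected bots that share an operator signal (for example, near-identical tweet content or creation times) and read botnets off as connected components, as sketched below with networkx; the linking criterion is an illustration, not Duo's published method.

import networkx as nx

def group_into_botnets(bots, shares_operator_signal):
    """Group detected bots into candidate botnets via pairwise linking."""
    g = nx.Graph()
    g.add_nodes_from(b["screen_name"] for b in bots)
    for i, a in enumerate(bots):
        for b in bots[i + 1:]:
            if shares_operator_signal(a, b):  # caller-supplied predicate
                g.add_edge(a["screen_name"], b["screen_name"])
    # Each connected component is a candidate botnet under one operator.
    return list(nx.connected_components(g))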

"We could still use very straightforward characteristics to accurately find new bots," Wright says. "Bots at a larger scale, in general, are using many of the same techniques they have in the past few years." Some bots evolve more quickly than others depending on the operator's goals.

Their tool was accurate on this dataset, but Anise says many bot accounts are subtly disguised: accounts often appeared normal at first glance while displaying botlike attributes.

In May, for example, the pair found a cryptocurrency botnet made up of automated accounts, which spoofed legitimate Twitter accounts to spread a giveaway scam. Spoofed accounts had randomly generated usernames and copied legitimate users' photos. They spread spam by replying to real tweets posted by real users, inviting them to join a cryptocurrency giveaway.

The botnet, like many of its kind, used several methods to evade detection. Oftentimes, malicious bots spoof celebrities and high-profile accounts as well as cryptocurrency accounts, edit profile photos to avoid image detection, and use screen names that are typos of real ones. This one went on to impersonate Elon Musk and news organizations such as CNN and Wired.

Joining the Bot Hunters
Anise and Wright are open-sourcing the tools and techniques they used to conduct their research in an effort to help other researchers build on their work and create new methodologies to identify malicious Twitter bots.

"It's a really complex problem," Anise adds. They want to map out their strategy and show how other people can use their work to continue mapping bots and botnet structures.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance & Technology, where she covered financial ...
