Dark Reading is part of the Informa Tech Division of Informa PLC


Vulnerabilities / Threats

1/27/2021
01:00 PM
Kevin Graham
Commentary

4 Clues to Spot a Bot Network

Protect against misinformation and disinformation campaigns by learning how to identify the bot networks spreading falsehoods.

Misinformation and disinformation have scaled in the Information Age. MIT researchers analyzed 126,000 stories between 2006 and 2016 and found that false stories spread six times faster than true stories on social networks. Whether falsehoods are deliberate (disinformation) or unintentional (misinformation), they impact every aspect of society, from politics to public health to commerce. But one thing falsehoods have in common is that they typically rely heavily on bot networks or automation for distribution. The following four social media behaviors are clues that you are dealing with a bot network versus a legitimate person or business.


1. Unnaturally Dense Relationship Networks
To appear important or authoritative, an account needs a critical mass of followers or correspondence. A disingenuous actor building a bot network therefore cannot simply create an account and start reposting false information; the account must be launched with a network of "friends" to give it an air of authority. Because these accounts are fabricated, their relationships are generally fabricated too: bots are usually most connected to other bots. From a relationship-network perspective, a bot network exhibits unnaturally dense, interconnected clusters with limited connectivity to real, verifiable accounts. Typically, bot networks exhibit the following traits:

  • Bots are connected, but their reach outside the network is limited.
  • The limited connections to the "real world" tend to give insight into the people and topics that the bots are designed to influence.
  • Sometimes, "master" bot accounts are given more rigorous backstopping to give the appearance of real people and often have more connections to the "real world," but the other bots within these dense networks have thin profiles.
  • "Master" bot profiles use slightly pixelated profile pictures to thwart image-matching software. 

Analyzing secondary and tertiary connections is key. Bot networks almost always sit on the periphery of the real conversation; a bot cluster is like a tumor hanging from the side of the true network. If you do an effective job of mapping the full network of relationships around a topic, then detecting these unusual, dense clusters on the periphery can be straightforward.
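This "dense cluster on the periphery" pattern can be sketched with a few lines of plain Python: for a suspected cluster, compare its internal edge density against the share of its edges that leave the cluster. (All account names and edges below are hypothetical, for illustration only.)

```python
def cluster_stats(edges, cluster):
    """Return (internal density, external edge ratio) for a node cluster.

    High density with a low external ratio is the bot-network signature:
    tightly interconnected accounts with little reach into the real graph.
    """
    cluster = set(cluster)
    internal = sum(1 for a, b in edges if a in cluster and b in cluster)
    external = sum(1 for a, b in edges if (a in cluster) != (b in cluster))
    n = len(cluster)
    possible = n * (n - 1) // 2          # max possible internal edges
    density = internal / possible if possible else 0.0
    total = internal + external
    ext_ratio = external / total if total else 0.0
    return density, ext_ratio

# Hypothetical follower graph: bots b1..b4 fully interconnected,
# with a single link out to a real account r1.
edges = [("b1", "b2"), ("b1", "b3"), ("b1", "b4"),
         ("b2", "b3"), ("b2", "b4"), ("b3", "b4"),
         ("b4", "r1"), ("r1", "r2"), ("r2", "r3")]
density, ext_ratio = cluster_stats(edges, ["b1", "b2", "b3", "b4"])
print(density, ext_ratio)  # 1.0 0.14285714285714285
```

A real investigation would run this over candidate clusters found by a community-detection algorithm, but the ratio itself captures the "tumor on the side of the network" shape described above.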

2. Reusing Post-Generating Algorithms
Typical human interactions involve a mix of original content, reposts from other authors, and engaging with or replying to conversation streams. In contrast, bots have little (if any) original content, repost almost exclusively, and have no engagement in actual conversations. The vast majority of bots are not sophisticated enough to effectively vary their reposted content, making it extremely easy to detect the specific sources of misinformation/disinformation they are designed to promote. Even more sophisticated bots that try to vary their content and sourcing still show high levels of automation. This is especially easy to detect when looking at the coordination across the entire bot network, as you can see how the connected network was designed to propagate a message.
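One simple way to quantify the reposting signal described above is pairwise Jaccard similarity over each account's set of posts; high overlap across many account pairs suggests a shared post-generating source. (The sample posts below are invented for illustration.)

```python
def jaccard(posts_a, posts_b):
    """Share of distinct posts two accounts have in common (0.0 to 1.0)."""
    a, b = set(posts_a), set(posts_b)
    return len(a & b) / len(a | b) if a | b else 0.0

bot1 = ["Buy now!", "Vote X", "Link: example.com", "Vote X today"]
bot2 = ["Buy now!", "Vote X", "Link: example.com", "weather is nice"]
human = ["my cat", "lunch pic", "Vote X"]

print(jaccard(bot1, bot2))   # 0.6   -- suspiciously high overlap
print(jaccard(bot1, human))  # ~0.17 -- ordinary incidental overlap
```

In practice you would compare near-duplicate text rather than exact strings, but even this crude measure separates coordinated reposting from organic conversation.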

3. Highly Uniform Posting Schedules
Humans post when the mood strikes, taking time out to eat, sleep, and live. Even though humans have patterns in behavior (e.g., always engaging online before work and before going to bed), they show daily variability and have regular times away (e.g., vacations). Less sophisticated bots follow strict posting schedules; for example, they often post on 24-hour cycles, leaving no time for sleep. Even the more sophisticated bots that employ randomization for posting content and have built-in downtime eventually exhibit patterns that can be identified. Analyzing the posting schedule reveals patterns that are inconsistent with human behavior.
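The schedule signal can be captured with a crude statistic: the coefficient of variation (standard deviation over mean) of the gaps between consecutive posts. A value near zero means metronome-regular posting; human timelines are bursty. (The timestamps below, in hours, are hypothetical.)

```python
from statistics import mean, stdev

def interval_cv(timestamps):
    """Coefficient of variation of the gaps between consecutive posts.

    Near zero suggests clockwork-regular, bot-like posting;
    larger values reflect bursty, human-like behavior.
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return stdev(gaps) / mean(gaps)

# Hypothetical posting times in hours.
bot_times = [i * 2.0 for i in range(48)]             # exactly every 2 hours, 24/7
human_times = [0, 1, 9, 10, 11, 24, 26, 33, 34, 48]  # bursts, then long gaps

print(interval_cv(bot_times))    # 0.0 -- perfectly uniform schedule
print(interval_cv(human_times))  # well above 0.5 -- bursty, human-like
```

Sophisticated bots add jitter, so a longer observation window and additional features (time-of-day histograms, gaps longer than a day) are needed before drawing conclusions.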

4. Positioning to Influence Specific Audiences
The target of a bot network is typically identifiable because bot networks are tools designed for achieving specific information goals. Here are two examples. 

One series of accounts generated more than 45,000 posts, averaging 18 posts per hour, 24 hours a day, with no time for sleep. Over 80% of the content overlapped between accounts. The final piece of the puzzle came from the external connections: the bot network was pushing content from aspiring authors, songwriters, and artists. These verifiable artists had likely purchased follower-boosting services that use bot networks to inflate follower counts and signal that an artist is an up-and-comer breaking onto the scene.

While investigating foreign influence regarding policy toward the Syrian Civil War, we discovered an account and subsequent network where every influential account voiced deep mistrust of the West and significant support for all Russian geopolitical positions. All of the accounts in this network reposted each other, creating a pro-Russian, anti-Western "echo chamber" that was designed to promote Russian policies throughout Europe and the West. 

Look for Clues
Bot networks are common vectors for false information, but there are certain behaviors and traits to look for that can tip you off that these accounts aren't backed by independent people or businesses. Put these clues to work the next time you're confronted with questionable information to keep falsehoods from spreading.

Kevin Graham served as an active-duty US Marine Corps infantryman before continuing his service as a government civilian. His career as an intelligence professional has provided numerous deployments around the world, serving in a multitude of capacities while supporting ...
 
