
Commentary | Kevin Graham | 1/27/2021, 01:00 PM

4 Clues to Spot a Bot Network

Protect against misinformation and disinformation campaigns by learning how to identify the bot networks spreading falsehoods.

Misinformation and disinformation have scaled in the Information Age. MIT researchers analyzed 126,000 stories shared between 2006 and 2017 and found that false stories spread six times faster than true stories on social networks. Whether falsehoods are deliberate (disinformation) or unintentional (misinformation), they affect every aspect of society, from politics to public health to commerce. Whatever the intent, falsehoods typically rely heavily on bot networks and automation for distribution. The following four social media behaviors are clues that you are dealing with a bot network rather than a legitimate person or business.


1. Unnaturally Dense Relationship Networks
To appear important or authoritative, an account needs a critical mass of followers or correspondence. A disingenuous actor building a bot network therefore cannot simply create an account and start reposting false information; each account must be seeded with a network of "friends" to give it an air of authority. Because these accounts are fabricated, their relationships are generally fabricated as well, which is why bots are usually most connected to other bots. From a relationship-network perspective, a bot network exhibits unnaturally dense, interconnected clusters with limited connectivity to real, verifiable accounts. Typically, bot networks exhibit the following traits:

  • Bots are connected, but their reach outside the network is limited.
  • The limited connections to the "real world" tend to give insight into the people and topics that the bots are designed to influence.
  • Sometimes, "master" bot accounts are given more rigorous backstopping to give the appearance of real people and often have more connections to the "real world," but the other bots within these dense networks have thin profiles.
  • "Master" bot profiles use slightly pixelated profile pictures to thwart image-matching software. 

Analyzing secondary and tertiary connections is key. Bot networks almost always sit on the periphery of the real conversation; a bot cluster is like a tumor hanging from the side of the true network. If you do an effective job of mapping the full network of relationships around a topic, then detecting these unusual, dense clusters on the periphery can be straightforward.
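
To make this concrete, here is a minimal sketch of peripheral-cluster detection, assuming you have already collected a follower/interaction graph as a networkx Graph. The community detector and both thresholds are illustrative assumptions, not the author's method.

```python
# A minimal sketch: find clusters that are dense inside but barely
# connected to the wider conversation. Thresholds are assumptions.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

MIN_DENSITY = 0.5       # assumed cutoff: how tightly knit the cluster is inside
MAX_CONDUCTANCE = 0.05  # assumed cutoff: how little of its edge volume leaves it

def suspicious_clusters(G: nx.Graph):
    """Yield (nodes, density, conductance) for unusually dense,
    unusually isolated communities."""
    for community in greedy_modularity_communities(G):
        if len(community) < 5 or len(community) == len(G):
            continue  # too small to judge, or simply the whole graph
        internal_density = nx.density(G.subgraph(community))
        isolation = nx.conductance(G, community)  # 0 = fully cut off
        if internal_density >= MIN_DENSITY and isolation <= MAX_CONDUCTANCE:
            yield community, internal_density, isolation
```

Conductance captures the "tumor" intuition directly: a value near zero means almost every edge the cluster has stays inside the cluster.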

2. Reusing Post-Generating Algorithms
Typical human interactions involve a mix of original content, reposts from other authors, and engagement with or replies to conversation streams. In contrast, bots produce little (if any) original content, repost almost exclusively, and do not engage in actual conversations. The vast majority of bots are not sophisticated enough to vary their reposted content effectively, which makes it easy to trace the specific sources of misinformation or disinformation they are designed to promote. Even more sophisticated bots that try to vary their content and sourcing still show high levels of automation. This is especially easy to detect across the entire bot network, where you can see how the connected accounts were designed to propagate a message in coordination.
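
For illustration, here is a minimal sketch of measuring that reuse, assuming you have each account's recent posts as plain text. Real pipelines use near-duplicate hashing such as MinHash or SimHash at scale; plain Jaccard similarity over word trigrams and the 0.8 threshold below are simplifying assumptions.

```python
# A minimal sketch: flag account pairs whose content overlap suggests a
# shared post-generating script.
from itertools import combinations

def shingles(text: str, n: int = 3) -> set:
    """Break a post into overlapping word n-grams ("shingles")."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def account_signature(posts: list) -> set:
    """Union of shingles across all of an account's posts."""
    sig = set()
    for post in posts:
        sig |= shingles(post)
    return sig

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

def flag_copy_networks(accounts: dict, threshold: float = 0.8) -> list:
    """accounts maps account name -> list of post texts; threshold is assumed."""
    sigs = {name: account_signature(posts) for name, posts in accounts.items()}
    return [(a, b, jaccard(sigs[a], sigs[b]))
            for a, b in combinations(sigs, 2)
            if jaccard(sigs[a], sigs[b]) >= threshold]
```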

3. Highly Uniform Posting Schedules
Humans post when the mood strikes, taking time out to eat, sleep, and live. Even though humans have patterns of behavior (e.g., always engaging online before work and before bed), they show daily variability and take regular time away (e.g., vacations). Less sophisticated bots follow strict posting schedules; for example, they often post on 24-hour cycles, leaving no time for sleep. Even more sophisticated bots that randomize their posting and build in downtime eventually exhibit identifiable patterns. Analyzing the posting schedule reveals patterns that are inconsistent with human behavior.
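
A rough sketch of this check, assuming you have an account's post timestamps as datetime objects: near-constant gaps between posts and round-the-clock activity are both red flags. The two cutoffs below are illustrative assumptions.

```python
# A minimal sketch: two cheap schedule signals. Humans show variable
# gaps between posts and a quiet block for sleep; bots often show neither.
from datetime import datetime
from statistics import mean, stdev

def interval_regularity(timestamps: list) -> float:
    """Coefficient of variation of gaps between posts; near 0 = clockwork."""
    ts = sorted(timestamps)
    gaps = [(b - a).total_seconds() for a, b in zip(ts, ts[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return float("inf")  # too little history to judge
    return stdev(gaps) / mean(gaps)

def active_hours(timestamps: list) -> int:
    """Distinct hours of the day with activity; 24 leaves no time for sleep."""
    return len({t.hour for t in timestamps})

def looks_automated(timestamps: list) -> bool:
    # Assumed cutoffs: near-constant gaps plus round-the-clock posting.
    return interval_regularity(timestamps) < 0.2 and active_hours(timestamps) >= 22
```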

4. Positioning to Influence Specific Audiences
The target of a bot network is typically identifiable because bot networks are tools built to achieve specific information goals. Here are two examples.

A series of accounts generated more than 45,000 posts, averaging 18 posts per hour, 24 hours a day, with no downtime for sleep. More than 80% of the content overlapped between accounts. The final piece of the puzzle came from the external connections: the network was pushing content from aspiring authors, songwriters, and artists. These verifiable artists had likely purchased follower-boosting services that use bot networks to inflate follower counts and signal that an artist is an up-and-comer breaking onto the scene.

While investigating foreign influence on policy toward the Syrian Civil War, we discovered an account, and then an entire network, in which every influential account voiced deep mistrust of the West and strong support for Russian geopolitical positions. All of the accounts in the network reposted one another, creating a pro-Russian, anti-Western "echo chamber" designed to promote Russian policies throughout Europe and the West.
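
Both cases reduce to the same analysis: tally what a flagged cluster amplifies outside itself. Here is a minimal sketch; the input shape (each bot mapped to the external accounts and link domains it reposted) is an assumed collection format, not any specific platform's API.

```python
# A minimal sketch: the external accounts and domains that dominate a
# cluster's reposts reveal whom the network was built to promote.
from collections import Counter

def amplification_targets(cluster: set, reposts: dict, top_n: int = 10):
    """cluster: flagged bot account names.
    reposts: bot name -> list of (source_account, link_domain) pairs."""
    sources, domains = Counter(), Counter()
    for bot in cluster:
        for source, domain in reposts.get(bot, []):
            if source not in cluster:  # only amplification of outsiders counts
                sources[source] += 1
                domains[domain] += 1
    return sources.most_common(top_n), domains.most_common(top_n)
```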

Look for Clues
Bot networks are common vectors for false information, but certain behaviors and traits can tip you off that a set of accounts isn't backed by independent people or businesses. The next time you're confronted with questionable information, put these clues to work to keep falsehoods from spreading.

Kevin Graham served as an active-duty US Marine Corps infantryman before continuing his service as a government civilian. His career as an intelligence professional has provided numerous deployments around the world, serving in a multitude of capacities while supporting ...
 
