Protect against misinformation and disinformation campaigns by learning how to identify the bot networks spreading falsehoods.

Kevin Graham, VP Canada & CALA Operations and Business Development, Babel Street

January 27, 2021

4 Min Read

Misinformation and disinformation spread further and faster than ever in the Information Age. MIT researchers who analyzed 126,000 stories shared between 2006 and 2016 found that false stories spread six times faster than true stories on social networks. Whether falsehoods are deliberate (disinformation) or unintentional (misinformation), they affect every aspect of society, from politics to public health to commerce. One thing most falsehoods have in common, though, is that they rely heavily on bot networks and automation for distribution. The following four social media behaviors are clues that you are dealing with a bot network rather than a legitimate person or business.

1. Unnaturally Dense Relationship Networks
To appear important or authoritative, an account needs a critical mass of followers and correspondence. A disingenuous actor building a bot network therefore cannot simply create an account and start reposting false information; the account must be launched with a network of "friends" to give it an air of authority. Because these accounts are fabricated, so are their relationships: bots are usually most connected to other bots. From a relationship-network perspective, a bot network forms unnaturally dense, interconnected clusters with limited connectivity to real, verifiable accounts. Typically, bot networks exhibit the following traits:

  • Bots are connected, but their reach outside the network is limited.

  • The limited connections to the "real world" tend to give insight into the people and topics that the bots are designed to influence.

  • Sometimes, "master" bot accounts are given more rigorous backstopping to give the appearance of real people and often have more connections to the "real world," but the other bots within these dense networks have thin profiles.

  • "Master" bot profiles use slightly pixelated profile pictures to thwart image-matching software. 

Analyzing secondary and tertiary connections is key. Bot networks almost always sit on the periphery of the real conversation; a bot cluster is like a tumor hanging from the side of the true network. If you do an effective job of mapping the full network of relationships around a topic, then detecting these unusual, dense clusters on the periphery can be straightforward.
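
To make that concrete, here is a minimal Python sketch, using the open-source networkx library, of how an analyst might flag clusters that are dense on the inside but have few edges into the wider graph. The community-detection method, the example graph, and the thresholds are illustrative assumptions, not a description of any particular vendor's tooling.

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def suspicious_clusters(G, min_size=10, density_min=0.5, external_ratio_max=0.05):
    """Return communities that look bot-like: dense inside, thin outside."""
    flagged = []
    for community in greedy_modularity_communities(G):
        nodes = set(community)
        if len(nodes) < min_size:
            continue
        sub = G.subgraph(nodes)
        internal_density = nx.density(sub)      # how interconnected the cluster is
        internal_edges = sub.number_of_edges()
        # Edges crossing out of the cluster into the rest of the conversation
        external_edges = sum(1 for _ in nx.edge_boundary(G, nodes))
        external_ratio = external_edges / max(internal_edges, 1)
        if internal_density >= density_min and external_ratio <= external_ratio_max:
            flagged.append(nodes)
    return flagged

# Example: a 12-account clique loosely attached to a larger "real" network
G = nx.barabasi_albert_graph(200, 3, seed=1)
G = nx.compose(G, nx.complete_graph(range(200, 212)))
G.add_edge(0, 200)  # a single bridge into the real conversation
print(suspicious_clusters(G))  # expect the clique to surface

The specific algorithm matters less than the principle: a genuinely organic community has many ties into the surrounding conversation, while a fabricated one does not.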

2. Reusing Post-Generating Algorithms
Typical human activity mixes original content, reposts from other authors, and engagement with or replies to ongoing conversations. In contrast, bots produce little (if any) original content, repost almost exclusively, and do not engage in actual conversations. The vast majority of bots are not sophisticated enough to effectively vary their reposted content, making it easy to trace the specific sources of misinformation or disinformation they are designed to promote. Even more sophisticated bots that try to vary their content and sourcing still show high levels of automation, which becomes especially apparent when you examine coordination across the entire bot network and see how the connected accounts were designed to propagate a message.
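
A rough way to quantify that recycling, shown below as a hedged sketch rather than a production detector, is to compare accounts by the word shingles their posts share; accounts pushing the same scripted content will have unusually high overlap. The data model and threshold here are illustrative assumptions.

from itertools import combinations

def shingles(text, k=5):
    """Break a post into overlapping k-word phrases (shingles)."""
    words = text.lower().split()
    if not words:
        return set()
    return {" ".join(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

def account_fingerprint(posts, k=5):
    """Union of shingles across everything an account has posted."""
    fingerprint = set()
    for post in posts:
        fingerprint |= shingles(post, k)
    return fingerprint

def flag_recycled_content(accounts, threshold=0.6):
    """accounts: dict mapping account id -> list of post texts.
    Returns account pairs whose shared content exceeds the Jaccard threshold."""
    fps = {acct: account_fingerprint(posts) for acct, posts in accounts.items()}
    flagged = []
    for a, b in combinations(fps, 2):
        union = fps[a] | fps[b]
        if not union:
            continue
        jaccard = len(fps[a] & fps[b]) / len(union)
        if jaccard >= threshold:
            flagged.append((a, b, round(jaccard, 2)))
    return flagged

Pairs of human accounts tend to score low on this overlap measure; coordinated bots pushing the same source material cluster near 1.0.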

3. Highly Uniform Posting Schedules
Humans post when the mood strikes, taking time out to eat, sleep, and live. Even though humans have patterns in behavior (e.g., always engaging online before work and before going to bed), they show daily variability and have regular times away (e.g., vacations). Less sophisticated bots follow strict posting schedules; for example, they often post on 24-hour cycles, leaving no time for sleep. Even the more sophisticated bots that employ randomization for posting content and have built-in downtime eventually exhibit patterns that can be identified. Analyzing the posting schedule reveals patterns that are inconsistent with human behavior.
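
As an illustration, the sketch below scores an account's timeline on two simple signals: how many of the 24 hours of the day it posts in (humans leave gaps for sleep) and how regular the gaps between posts are (clockwork intervals suggest automation). The thresholds are illustrative assumptions.

from datetime import datetime, timedelta
from statistics import mean, pstdev

def schedule_features(timestamps):
    """timestamps: chronologically sorted datetimes for one account."""
    hours_active = len({t.hour for t in timestamps})
    gaps = [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]
    # Coefficient of variation of the gaps: near zero means clockwork posting
    gap_cv = pstdev(gaps) / mean(gaps) if gaps and mean(gaps) > 0 else 0.0
    return hours_active, gap_cv

def looks_automated(timestamps, min_hours_active=20, max_gap_cv=0.3):
    """Flag accounts that post around the clock at near-constant intervals."""
    hours_active, gap_cv = schedule_features(timestamps)
    return hours_active >= min_hours_active and gap_cv <= max_gap_cv

# An account posting every 20 minutes, day and night, scores as automated
bot_timeline = [datetime(2021, 1, 1) + timedelta(minutes=20 * i) for i in range(500)]
print(looks_automated(bot_timeline))  # True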

4. Positioning to Influence Specific Audiences
The target of a bot network is typically identifiable because bot networks are tools designed to achieve specific information goals. Here are two examples.

A series of accounts generated more than 45,000 posts, averaging 18 posts per hour, 24 hours a day, with no time out for sleep. More than 80% of the content overlapped between accounts. But the final piece of the puzzle came from looking at the external connections: in this case, the bot network was pushing content from aspiring authors, songwriters, and artists. These verifiable artists had likely purchased services that use bot networks to inflate follower counts and signal that an artist is an up-and-comer breaking onto the scene.

While investigating foreign influence on policy toward the Syrian Civil War, we discovered an account, and then an entire network, in which every influential account voiced deep mistrust of the West and strong support for all Russian geopolitical positions. All of the accounts in this network reposted each other, creating a pro-Russian, anti-Western "echo chamber" designed to promote Russian policies throughout Europe and the West.
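
Once a bot cluster has been identified using the signals above, a simple count of its outward connections often reveals who or what it was built to promote. The sketch below, with an assumed edge-list data model, ranks the real-world accounts the bots reference most often.

from collections import Counter

def likely_targets(bot_accounts, interactions, top_n=10):
    """bot_accounts: set of account ids already flagged as bots.
    interactions: iterable of (source_id, target_id) repost/mention/follow edges.
    Returns the external accounts the bots reference most often."""
    counts = Counter(
        target
        for source, target in interactions
        if source in bot_accounts and target not in bot_accounts
    )
    return counts.most_common(top_n)

Applied to the cases above, a ranking like this would point toward the aspiring artists in the first example and the pro-Russian narratives and Western audiences in the second.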

Look for Clues
Bot networks are common vectors for false information, but there are certain behaviors and traits to look for that can tip you off that these accounts aren't backed by independent people or businesses. Put these clues to work the next time you're confronted with questionable information to keep falsehoods from spreading.

About the Author(s)

Kevin Graham

VP Canada & CALA Operations and Business Development, Babel Street

Kevin Graham served as an active-duty US Marine Corps infantryman before continuing his service as a government civilian. His career as an intelligence professional has taken him on numerous deployments around the world, serving in a multitude of capacities in support of global counterterrorism and hostage rescue operations. Kevin left the government after more than 20 years of service and joined Babel Street two years ago.

