6 Truths About Disinformation Campaigns
Disinformation goes far beyond just influencing election outcomes. Here's what security pros need to know.
February 28, 2020
![](https://eu-images.contentstack.com/v3/assets/blt6d90778a997de1cd/blteea22ce91dfed2cc/64f0d33109cfa45b2012bfb5/01-disinfo.jpg?width=700&auto=webp&quality=80&disable=upscale)
Exploding social media use and the growing availability of software bots and tools for manipulating video and other online content have made it easier than ever for bad actors to conduct broad disinformation campaigns.
While many tend to think of these campaigns as being mostly aimed at influencing election outcomes, the reality is that disinformation impacts a lot more than just politics and political leaders.
Recently, governments, hacktivists, and other threat actors have begun using disinformation and propaganda to push various partisan agendas, including those tied to health emergencies like the Coronavirus, religious beliefs, and financial markets. Security experts expect those with malintent to increasingly use disinformation campaigns to try to harm companies' brands and reputations, spread rumors about business leaders, and hurt organizations financially.
"Disinformation is as old as communication. It just happens to take a new form," says Chris Morales, head of security analytics at Vectra. "Fighting disinformation is hard and comes down to what people will and will not believe."
Following are six things to know about disinformation campaigns.
As in 2016, American voters can expect to be bombarded with disinformation and fake news in the run-up to the general elections this November. But it won't be just Russian trolls and other foreign actors disseminating the online lies this time around.
Researchers at New York University's Stern Center for Business and Human Rights are among many who believe that most of the disinformation around the 2020 elections will be domestically generated. Right- and left-wing groups in the US will try to sow confusion, spread propaganda, and exacerbate political and social divisions with false and misleading narratives and imagery about political candidates and parties, according to the NYU researchers. Bad actors will mainly use social media platforms -- in particular, Facebook, Twitter, YouTube, and WhatsApp -- to disseminate disinformation and mislead voters in an effort to influence the outcome of the elections. Some will try to profit from the situation by establishing fake news sites loaded with pay-per-click ads pushing highly sensationalized and exaggerated stories designed to draw traffic to them, security firm Nisos warned in a report last year.
According to a 2019 study by the University of Oxford, there's evidence of organized social media manipulation in at least 70 countries. In each of those countries, at least one political party or government agency is using social media to shape public attitudes.
In addition to domestic actors, state-backed actors from Russia, Iran, China, and other countries are expected to be very active around the 2020 elections. "With the 2020 US election [cycle] well underway, the potential for state actors such as Iran, North Korea, the Russian Federation, and other political groups to get involved in spreading disinformation [will rise]," says Fausto Oliveira, principal security architect at Acceptto. "There is evidence that attacks have already been attempted, and as the campaign progresses, I expect threat actors and white hats to further step up their efforts."
Disinformation campaigns targeting businesses and enterprise organizations are emerging as a new threat. Software bots, tools that employ machine learning, and natural language-generation software are making it easier for threat actors to create and spread disinformation about businesses and business leaders, according to consulting firm Deloitte.
State actors, criminal groups, disgruntled employees, and business rivals have begun employing these tools to create mischief against organizations in the form of fake reviews, fake stories about alleged discrimination in the workplace, misquoted executive statements, and manipulated videos. In one instance, a retail chain experienced brand damage and increased public scrutiny after fake reports about employee discrimination. In another instance, a company experienced lost sales and a drop in its stock price following a similar disinformation campaign on social media.
One publicly reported incident involved restaurant chain Olive Garden. In August 2019, Twitter posts urging people to boycott the company went viral after a false report surfaced about Olive Garden contributing to US President Donald Trump's re-election campaign. Liberal-leaning accounts quickly spread the tweet via retweets and quote-tweets, calling on people to boycott the restaurant chain, while conservative-leaning accounts urged people to rally behind Olive Garden. According to security firm Nisos, which tracked the incident, #BoycottOliveGarden received more than 52,500 mentions on Twitter from 48,700 users and generated over 139 million impressions.
Such campaigns can impact organizations in multiple ways, Deloitte said in its report. Potential consequences include brand and reputational damage, loss of public trust, and financial losses, the consulting firm said.
New artificial-intelligence and machine-learning tools are making it much easier for people to manipulate video, image, and audio content in almost imperceptible ways -- but to significant effect. Researchers have shown such manipulated media, or "deepfakes," can be used to make it appear like someone said something they didn't, or to swap faces in videos and images, or to create realistic videos of someone speaking from just a single photo and audio file.
Some well-known examples of deepfakes include a fake video of former US President Barack Obama apparently saying something disparaging about President Trump, and another of House Speaker Nancy Pelosi purportedly stammering in the middle of a speech.
Fears that manipulated videos of high-profile people, including politicians and business leaders, could be used to spread very convincing-appearing disinformation are growing. Last June, the US House Select Committee on Intelligence held a public hearing to discuss the national security ramifications of deepfakes and other AI-manipulated media. In prepared comments, the chairman of the committee, Adam Schiff, described deepfakes as presenting a potentially even more sinister form of deception and disinformation by foreign and domestic actors than that carried out by Russian actors in the 2016 elections. Deepfakes will "enable malicious actors to foment chaos, division, or crisis, and they have the capacity to disrupt entire campaigns, including that for the presidency," Schiff said.
Just as with broader disinformation campaigns, manipulated video and audio content of business leaders can be used to damage brand reputations, alter public perception, and sow confusion about a company. Significantly, as deepfakes proliferate, they will make it easier for people who actually said or did something to later dismiss genuine recordings of it as manipulated video.
"Given the increasing reliance on short video segments viewed on multiple social media platforms, it's likely that technologies enabling deepfakes will be increasingly used to position celebrities and political figures in ways that are contrary to their core beliefs," says Chris Hazelton, director of security solutions at Lookout. "A successful attack using a deepfake could cause enough damage to trusted news sources that some audiences will be pushed to less trustworthy or more partisan sources."
The rumors surrounding the ongoing Coronavirus outbreak (Covid-19) are the latest example of how social media platforms have made it easy for bad actors to sow confusion and broadly undermine public confidence during a health crisis or other national emergency.
In addition to the predictable phishing and malware attacks that have taken advantage of the interest around the topic, a flurry of rumors and disinformation surrounding the outbreak has been spread via Facebook, Twitter, and other platforms in recent weeks. The most virulent among them include rumors about the virus actually being a bio-weapon and the death toll being in the tens and even hundreds of thousands worldwide. Now there is some speculation that such rumors contributed to the near-1,000-point drop in the Dow Jones Industrial Average on February 24.
"Manipulative actors of all types have capitalized on this global health crisis, mobilizing with the goal of spreading fear and panic across social media," Blackbird.AI said in a February 2020 report. The company's analysis of some nearly 7 million tweets from over 2.6 users between February 2 and February 14 found some 2.7 million of them to be manipulative in nature. According to Blackbird.AI, ongoing campaigns around Coronavirus include those aimed at exploiting religious beliefs, spreading health disinformation, fostering xenophobia, and spreading fear. While previous health scares, such as SARS and Swine Flu, have generated disinformation as well, the information ecosystem is now especially well-poised to generate a wealth of online lies and disinformation, Blackbird. AI said in its report. The situation has prompted the World Health Organization to establish EPI-WIN, a new website for countering what it calls "infodemics" -- or campaigns spreading misinformation and disinformation during a health emergency.
Another recent example of bad actors spreading disinformation was the massive wildfires in Australia. People trying to downplay the role of climate change used social media to spread the false narrative of the fires being caused by arsonists.
Services are becoming available in underground criminal markets that allow anyone to launch a disinformation campaign against a targeted entity for a fee. The services allow bad actors to discredit victims while offloading the actual dissemination of online lies to third parties.
An investigation last September by threat intelligence firm Recorded Future found that these services will go to extreme lengths to accomplish the tasks they are assigned, including filing fictitious criminal complaints against companies, framing individuals in their workplaces, destroying the reputation of a rival, and countering an opponent's disinformation campaign.
By spending around $6,000, researchers from Recorded Future were able to hire two threat actors on a Russian-speaking underground forum and conduct two separate disinformation campaigns against a fictitious company the researchers claimed to have newly established in a Western country. One of the threat actors was assigned to spreading positive news and buzz about the new but nonexistent firm, while the other was tasked with doing exactly the opposite.
In barely two weeks, the threat actor spreading the positive disinformation -- mainly announcing the new company and expounding on its virtues -- was able to place articles in two media outlets. One of them was a less-established outlet and the other a far more established company that had published a newspaper for more than 100 years, according to Recorded Future. The same threat actor also generated a small amount of interest around the fictitious company on social media.
The other threat actor, meanwhile, managed to get articles accusing the fictitious company of manipulating its employees onto a couple of media sites, and then used fake accounts on major social media platforms to amplify the negative news. Fees for placing the fake stories in media outlets ranged from $600 for low-profile sites to over $18,000 on some top tech sites.
"Disinformation service providers have the ability to publish articles in media sources ranging from dubious websites to more reputable news outlets," Recorded Future concluded. "[They] use a combination of both established and new [social media] accounts to propagate content without triggering content moderation controls."
Fact-checking services can play a vital role in combating disinformation. But when disinformation -- or even just true but partisan information -- is spread under the guise of a fact-checking service, that can become a problem.
During a November 2019 election debate between Boris Johnson, the UK's Conservative Prime Minister, and Labour Party leader Jeremy Corbyn, the Conservative Party changed the name of its official Twitter account from "CCHQ Press" to "FactCheckUK." The party also replaced the display image/logo on its account with one similar to the checkmark logos used by actual fact checkers. "But instead of true fact-checks, the tweets had a distinct point of view, as would be expected from a political party's press shop," the Poynter Institute said in a report on the incident.
Though the Conservative Party has gone back to using its old Twitter name, the incident highlighted several potentially troubling issues. According to Poynter, any entity positioning itself as a fact-checker when it is not is misleading the public and undermines confidence in real fact checkers. "If people start to believe that fact-checking can come from partisan sources, they no longer have reason to believe it," Poynter said.
Fact verification service Factal points to another, similar instance involving Mexican President Andres Manuel Lopez Obrador, who last year launched his own fact-verification unit called "Verificado Notimex." Though the unit is supposedly focused on debunking false news and fact-checking dubious content on local media, its Twitter account name and logo are confusingly similar to those of an independent fact-checking service. "While rather blatant, you can expect more nuanced efforts to create biased fact-checking services in the months to come," Factal said.