The UK company had been in business only a few months and was already receiving praise from the press, including an article in one well-known publication. But that seeming good luck didn't last: Within a month, malicious — and false — stories started appearing that said the staffing firm had hired out a woman to work at a strip club.
The company was the victim of a disinformation campaign. Luckily, the business was fake, part of an experiment run by intelligence firm Recorded Future.
To gauge the effectiveness of commercial disinformation campaigns, Recorded Future sought out services to bolster, or undermine, the fictitious company's reputation. In less than a month, and for a total of $6,050, it hired two Russian services that used a surprisingly extensive online infrastructure, ranging from social media accounts to online writers, to spread disinformation, says Roman Sannikov, director of analyst services at Recorded Future. The list of publications in which the services claimed to be able to place stories ran the gamut from fake news sites to a top international news service.
"Companies need to be hyper-aware of what is being said on social media and really try to address any kind of disinformation when they find it," Sannikov says. "The gist of our research was really how these threat actors use these different types of resources to create an echo chamber of disinformation. And once it gets going, it is much harder to address."
Disinformation has become a major focus in the political arena. In 2018, the US government indicted 13 Russian nationals and three organizations for their efforts — using political advertisements, social media, and e-mail — to sway the 2016 US presidential election.
Yet such campaigns are not useful only in national politics. Disinformation campaigns are enabled, and made more efficient, by the data collection and targeting capabilities of modern advertising networks. While companies like Cambridge Analytica pushed the boundaries too far, even entirely legal uses of advertising networks' capabilities can do great harm.
"The targeting models that have allowed advertisers to reach new audiences are being abused by these hucksters that are trying to spread false narratives," says Sean Sposito, senior analyst for cybersecurity at Javelin Strategy & Research. "The advertising industry has built a great infrastructure for targeting, but it's also a great channel to subvert for disinformation."
Disinformation has already harmed companies. In 2018, members of the beauty community revealed that influencers paid to promote a company's products had been paid extra money to criticize competitors' products. The Securities and Exchange Commission (SEC) has filed numerous charges against hedge funds and stock manipulators for taking short positions on particular firms and then spreading false information about those firms. In September 2018, for example, the SEC charged Lemelson Capital Management LLC and its principal, Gregory Lemelson, with such an attack against San Diego-based Ligand Pharmaceuticals.
At the RSA Conference in 2019, Cisco chief security and trust officer John N. Stewart warned that disinformation did not just matter to elections, but to businesses as well. "Disinformation is being used as a tool to influence people—and it’s working," Stewart said.
Even true information, framed within a particular narrative, can harm companies. The portrayal of Kaspersky as a firm beholden to Russia, and of Chinese technology giant Huawei as a national security risk, has had significant impacts on both companies.
So how can companies prevent disinformation from affecting them in 2020 and beyond? Experts point to three strategies.
Visibility: Know Who's Talking About You
Businesses need to be aware of information targeting their products and services. While much of this capability may be ensconced in product marketing and management groups, infosec teams should be involved as well. Because disinformation can be part of a targeted campaign that includes other types of exploitation, it can be an early sign that a business is being targeted by adversaries, says Mike Wyatt, principal for the cyber risk services and identity management practice at consultancy Deloitte.
"Every company needs a risk-sensing capability," he says. "The goal is that if something does pop up, there is the ability to act quickly."
And attackers can work quickly. In its experiment, Recorded Future found that generating disinformation took only a few days, far less time than it had taken to establish a credible online presence for the "legitimate" business. The problem is that these malicious marketing specialists keep their infrastructure prepared and ready to spread disinformation quickly.
"The threat actors that we hired were still able to create the profiles and articles within a few days, and these profiles had thousands of followers," says Recorded Future's Sannikov. "So, obviously, the accounts that are used for this are probably a network of existing accounts that they can use to propagate information."
Identity: Create a Trusted and Secure Channel
During a crisis, companies also need a legitimate channel through which to communicate with customers and the media. Losing control of an official channel can be devastating for a firm: Not only do the attackers gain a legitimate channel to use for disinformation, but they also undermine the business's ability to respond.
For that reason, companies should treat such communications channels as critical assets, monitoring them closely and securing the accounts with multiple authentication factors.
"There has to be an effort to lock down these accounts so that anyone associated with the company cannot have their accounts compromised and used by the attackers," Deloitte's Wyatt says.
The importance of trusted accounts will only grow as new technologies make identity harder to secure and disinformation more convincing. Deep-fake audio and video, for example, can lend credence to disinformation and cause immense reputational damage. In August 2019, criminals used deep-fake technology to re-create a chief executive's voice and trick a UK energy firm into transferring €220,000, about US $243,000, the company's insurance firm told The Wall Street Journal.
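One of the "multiple factors" typically used to lock down official accounts is a time-based one-time password (TOTP), the rotating six-digit code generated by authenticator apps. As a hedged illustration of how that second factor works under the hood (not a recommendation to roll your own authentication), the RFC 6238 algorithm fits in a few lines of standard-library Python:

```python
import base64
import hmac
import struct
import time


def totp(secret_b32, for_time=None, step=30, digits=6):
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# The RFC 6238 test-vector secret "12345678901234567890" in base32
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET, for_time=59))  # "287082", matching the RFC test vector
```

Because the code is derived from a shared secret and the current time, a phished password alone no longer hands an attacker the corporate account; in production, teams would rely on a vetted identity provider or library rather than this sketch.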
Policy and Practice: Game It Out
Where security teams really can make a difference, however, is in the creation of policy and in hosting exercises to practice response, Deloitte's Wyatt says. Thinking about disinformation attacks prior to an actual incident can significantly reduce the damage done.
"There needs to be a crisis response plan," Wyatt says. "We see when an organization does not have a plan in place, the mistakes made because of the lack of thinking of all the facets of an incident can really amplify the damage of the situation, as opposed to minimizing the impact."
Another reason to practice responding to such incidents regularly is that a typical disinformation threat involves a variety of business groups, from security to legal and from public relations to product marketing. Getting those groups working together and establishing a playbook before an actual incident is critical.
And those are skills the security team already has, Wyatt says.
"We already do this for cyber-risks," he says. "And we strongly encourage from the C-suite down that everyone be involved in these drills. Because when you do a simulation, you create the pressure that is there in a real event, and they get some muscle memory in how to respond."