RSA panelists address the delicate technical challenges of combating information warfare online without causing First Amendment freedoms to take collateral damage.

Sara Peters, Senior Editor

March 7, 2019

5 Min Read

RSA CONFERENCE 2019 – San Francisco – Information warfare is often waged on social media, where legitimate consumer communication tools are weaponized by bad actors. As Facebook, Twitter, and National Security Agency (NSA) representatives discussed here today, the battleground is civilian territory, and if the defenders aren't careful, First Amendment freedoms will suffer severe collateral damage.

"So far, America has emerged as one of the clearest losers in this kind of warfare," said panel moderator Ted Schlein, general partner at Kleiner Perkins Caufield & Byers, during the session "The Weaponization of the Internet."

(In another keynote session Wednesday, General Paul Nakasone, commander of US Cyber Command, told CBS News' Olivia Gazis that while Americans saw the Internet as a way for democracy to spread throughout the world, adversaries saw that same possibility as a threat.)

Schlein posed the question of why US intelligence agencies hadn't gotten ahead of threats sooner – threats like disinformation campaigns, voter manipulation, hate speech crimes, and recruitment by terror organizations.

"I think there were efforts, but ... we're trying to shape and react in a place where we're in the middle of speech," said panelist Rob Joyce, senior cybersecurity adviser to the NSA. "We're in a place where, as Americans, we value that First Amendment and the ability to say what I feel, I believe. And getting in the middle and breaking that disruptive speech that can be amplified on these platforms – that's a hard place for America to go."

However, panelist P.W. Singer, senior fellow at New America and author of "LikeWar: The Weaponization of Social Media," suggested that intelligence services, platforms, and politicians were all "looking in the wrong place" for bad actors.

"We were looking, for example, for people hacking Facebook client accounts, not buying ads at scale that over half the American population saw unwittingly," Singer said. "We were looking in the wrong place. They were looking for attackers who exploited Facebook accounts, not ones who bought Facebook ads."

Indeed, attackers are building off some techniques first perfected by marketers. As Twitter VP of trust and safety Del Harvey explained, the first type of manipulation that Twitter discovered was a campaign to convince Justin Bieber to do a tour in Brazil; it was the first example of a strategic effort to create and sustain a trending topic. (Bieber did end up touring in Brazil, she noted.)

"ISIS's top recruiter is mirroring off of [pop star] Taylor Swift and what works for her to win her online battles," Singer said. "Or, in turn, Russian information operations are using the tools created by [Facebook and Twitter] not to market how they were intended but to misuse them to go after American democracy.”

So can the platforms tackle the malicious use problem by simply scanning tweets for ISIS recruitment videos and Russian propaganda (and ignoring Taylor Swift)? Not necessarily.

"Content is actually one of the weaker signals' of a bad actor," Twitter's Harvey explained.

Content might be shared for many reasons: Terrorist recruitment propaganda might be shared as part of a news report on that terrorist organization, for example. Behavior, conversely, is more telling. "There are certain behaviors that you can identify as being attempts at manipulation," Harvey said.

For example, a user may be part of a network of accounts pushing the same messaging. Those accounts may also be linked by IP address or carrier, and they may be targeting certain networks or trying to social-engineer their way into a trusted group.

This manipulator behavior is quite dissimilar to that of the community-native true believer who shares the same content, Harvey said.
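Harvey didn't describe Twitter's actual detection systems, but the behavioral idea can be illustrated with a minimal sketch in Python. Everything below is a hypothetical assumption for illustration – the function names, the shared-IP signal, and the 0.7 similarity threshold are not Twitter's pipeline. The sketch flags pairs of accounts that push near-identical messaging from the same IP address, so what raises suspicion is the coordination pattern, not the words being posted:

```python
from collections import defaultdict
from itertools import combinations

def shingles(text, k=5):
    """Word k-grams used as a crude near-duplicate fingerprint."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a, b):
    """Similarity of two fingerprint sets, 0.0 to 1.0."""
    return len(a & b) / len(a | b) if a | b else 0.0

def flag_coordinated(posts, sim_threshold=0.7):
    """posts: list of dicts with 'account', 'ip', and 'text' keys.
    Flags account pairs that share an IP address AND push near-identical
    messaging -- the behavioral signal, not the content itself."""
    by_ip = defaultdict(set)
    fingerprints = defaultdict(set)
    for p in posts:
        by_ip[p["ip"]].add(p["account"])
        fingerprints[p["account"]] |= shingles(p["text"])

    suspicious = set()
    for ip, accounts in by_ip.items():
        for a, b in combinations(sorted(accounts), 2):
            if jaccard(fingerprints[a], fingerprints[b]) >= sim_threshold:
                suspicious.add((a, b, ip))
    return suspicious
```

A real platform would cluster across far more signals – device fingerprints, account creation times, follow graphs – and at vastly greater scale; the point of the sketch is that the trigger is coordinated behavior, which a lone true believer sharing the same content would never exhibit.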

NSA's Joyce said that behavior connects to three main facets of an account: "The content itself, which we all agree is the most troublesome and the hardest to deal with. And then there's an identity; it may be real, it may be assumed. And then there's amplification."

Panelist Nathaniel Gleicher, Facebook's head of cybersecurity policy, added that wherever there is public debate, bad actors will target it. The challenge is stopping the bad actors without stopping the debate.

"The way you make progress in the security world is you identify ways to impose more friction on the bad actors and the behaviors that they’re using, without simultaneously imposing friction on a meaningful public discussion," he said. "That's an incredibly hard balance." 

Facebook approaches this challenge, Gleicher said, with a combination of automated tools and human investigators: the investigators hunt the most sophisticated bad actors and identify their core behaviors, and automated systems then make those behaviors more difficult to execute at scale.
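Gleicher didn't specify the mechanics, but the "friction" principle lends itself to a small hypothetical sketch: score an account on behavioral signals, then answer with escalating challenges rather than content removal. All names, weights, and thresholds below are illustrative assumptions, not Facebook's implementation:

```python
from dataclasses import dataclass

@dataclass
class AccountActivity:
    """Hypothetical behavioral signals for one account."""
    account_age_days: int
    posts_last_hour: int
    distinct_ips_last_day: int

def risk_score(a: AccountActivity) -> float:
    """Toy behavioral risk score in [0, 1]; weights are illustrative."""
    score = 0.0
    if a.account_age_days < 7:        # brand-new account
        score += 0.3
    if a.posts_last_hour > 30:        # inhuman posting tempo
        score += 0.4
    if a.distinct_ips_last_day > 5:   # infrastructure hopping
        score += 0.3
    return min(score, 1.0)

def friction_for(a: AccountActivity) -> str:
    """Map risk to escalating friction rather than removal, so ordinary
    speech is untouched while bulk abuse becomes expensive."""
    r = risk_score(a)
    if r >= 0.7:
        return "require_phone_verification"
    if r >= 0.4:
        return "captcha_and_rate_limit"
    return "no_friction"
```

Note that none of these checks reads the content of a post – which mirrors the balance Gleicher described: a high-volume fake-account operation hits friction at every step, while a genuine user arguing the same position never notices it.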

Because regulating content is problematic, Facebook often tackles identity and amplification instead – for example, by changing the way ads are purchased and by making it harder to create fake accounts or bots.

"None of this means that we shouldn't be taking action on content that clearly violates our policies," Gleischer noted. "The challenge is, the majority of the content we see in information operations doesn't violate our policies. It's not clearly hate speech, and it's potentially framed to not fit into that bucket. And it's not provably false. There's a lot that fits into that gray space."

Twitter's Harvey noted that the conversation about "bots" has become so pervasive that it has begun to have a cultural impact on regular human discourse.

"It is amazing the number of times you will see two people who get in an argument and one of them decides to end it by just saying, 'Well, you're just a bot [when] it is demonstrably not a bot," she said.

Pasting the label of "bot" on anyone with a differing opinion is being used as "an exit path from conflict, from disagreement," Harvey added. "In fact, you're a Russian bot. And you are here to try to sway my mind on the topic of local football teams."


About the Author(s)

Sara Peters

Senior Editor

Sara Peters is Senior Editor at Dark Reading and formerly the editor-in-chief of Enterprise Efficiency. Prior to that, she was senior editor for the Computer Security Institute, writing and speaking about virtualization, identity management, cybersecurity law, and a myriad of other topics. She authored the 2009 CSI Computer Crime and Security Survey and founded the CSI Working Group on Web Security Research Law -- a collaborative project that investigated the dichotomy between laws regulating software vulnerability disclosure and those regulating Web vulnerability disclosure.

