3/7/2019 05:45 PM

Twitter, Facebook, NSA Discuss Fight Against Misinformation

RSA panelists address the delicate technical challenges of combating information warfare online without causing First Amendment freedoms to take collateral damage.

RSA CONFERENCE 2019 – San Francisco – Information warfare is often waged on social media, where legitimate consumer communication tools are weaponized by bad actors. As Facebook, Twitter, and National Security Agency (NSA) representatives discussed here today, the battleground is civilian territory, and if the defenders aren't careful, First Amendment freedoms will suffer severe collateral damage.

"So far, America has emerged as one of the clearest losers in this kind of warfare," said panel moderator Ted Schlein, general partner at Kleiner Perkins Caufield & Byers, during the session "The Weaponization of the Internet."

(In another keynote session Wednesday, General Paul Nakasone, commander of US Cyber Command, told CBS News' Olivia Gazis that while Americans saw the Internet as a way for democracy to spread throughout the world, adversaries saw that same possibility as a threat.)

Schlein posed the question of why US intelligence agencies hadn't gotten ahead of threats sooner – threats like disinformation campaigns, voter manipulation, hate speech crimes, and recruitment by terror organizations.

"I think there were efforts, but ... we're trying to shape and react in a place where we're in the middle of speech," said panelist Rob Joyce, senior cybersecurity adviser to the NSA. "We're in a place where, as Americans, we value that First Amendment and the ability to say what I feel, I believe. And getting in the middle and breaking that disruptive speech that can be amplified on these platforms – that's a hard place for America to go."

However, panelist P.W. Singer, senior fellow at New America and author of "LikeWar: The Weaponization of Social Media," suggested that intelligence services, platforms, and politicians were all "looking in the wrong place" for bad actors.

"We were looking, for example, for people hacking Facebook client accounts, not buying ads at scale that over half the American population saw unwittingly," Singer said. "We were looking in the wrong place. They were looking for attackers who exploited Facebook accounts, not ones who bought Facebook ads."

Indeed, attackers are building off some techniques first perfected by marketers. As Twitter VP of trust and safety Del Harvey explained, the first type of manipulation that Twitter discovered was a campaign to convince Justin Bieber to do a tour in Brazil; it was the first example of a strategic effort to create and sustain a trending topic. (Bieber did end up touring in Brazil, she noted.)

"ISIS's top recruiter is mirroring off of [pop star] Taylor Swift and what works for her to win her online battles," Singer said. "Or, in turn, Russian information operations are using the tools created by [Facebook and Twitter] not to market how they were intended but to misuse them to go after American democracy.”

So can the platforms tackle the malicious use problem by simply scanning tweets for ISIS recruitment videos and Russian propaganda (and ignoring Taylor Swift)? Not necessarily.

"Content is actually one of the weaker signals' of a bad actor," Twitter's Harvey explained.

Content might be shared for many legitimate reasons: Terrorist recruitment propaganda might appear as part of a news report on that terrorist organization, for example. Behavior is a stronger signal. "There are certain behaviors that you can identify as being attempts at manipulation," Harvey said.

For example, a user may be part of a network of accounts pushing the same messaging. These accounts are also related by IP address and carrier. They may be targeting certain networks or trying to social engineer their way into a trusted group.

A manipulator's behavior is actually quite dissimilar to that of the community-native true believer who shares the same content, Harvey said.
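To make that distinction concrete, here is a minimal, hypothetical sketch of the kind of behavioral signal Harvey describes: flagging clusters of accounts that push identical messaging from a narrow set of network origins, without judging what the message says. The Post record, field names, and thresholds are invented for illustration; they are not Twitter's actual detection pipeline.

```python
# Hypothetical illustration only -- not Twitter's real system.
# Flags groups of accounts that post the same normalized text from a
# small number of IP prefixes or carriers, i.e., coordinated behavior.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Post:
    account_id: str
    ip_prefix: str   # e.g., first three octets of the posting IP
    carrier: str
    text: str

def flag_coordinated_clusters(posts, min_accounts=5, max_origins=2):
    """Return (message, accounts) pairs that look like coordinated pushes."""
    by_text = defaultdict(list)
    for p in posts:
        normalized = " ".join(p.text.lower().split())
        by_text[normalized].append(p)

    flagged = []
    for text, group in by_text.items():
        accounts = {p.account_id for p in group}
        origins = {(p.ip_prefix, p.carrier) for p in group}
        # Many accounts funneled through few network origins behaves like a
        # campaign; organic fans tend to come from diverse infrastructure.
        if len(accounts) >= min_accounts and len(origins) <= max_origins:
            flagged.append((text, sorted(accounts)))
    return flagged
```

A real system would weigh far more signals (timing, follower graphs, account age), but even this toy version shows why identical content alone is weak evidence while coordinated behavior is much stronger.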

NSA's Joyce said that behavior connects in some way to three main aspects of an account: "The content itself, which we all agree is the most troublesome and the hardest to deal with. And then there's an identity; it may be real, it may be assumed. And then there's amplification."

Panelist Nathaniel Gleicher, Facebook's head of cybersecurity policy, added that whenever there is a public debate, bad actors will target it. The challenge is stopping the bad actors without stopping the debate.

"The way you make progress in the security world is you identify ways to impose more friction on the bad actors and the behaviors that they’re using, without simultaneously imposing friction on a meaningful public discussion," he said. "That's an incredibly hard balance." 

Facebook approaches this challenge, Gleicher said, with a combination of automated tools and human investigators, who look for the most sophisticated bad actors, identify their core behaviors, and develop ways to automatically make those behaviors more difficult to commit at scale.

Because regulating content is problematic, the company may tackle the issues of identity and amplification instead – such as changing the way ads are purchased on Facebook and making it more difficult to create fake accounts or bots.
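As a rough sketch of what friction on identity could look like in practice, the snippet below throttles bulk account creation from a single network origin and routes suspicious signups to extra verification. The class name, limits, and signup record are assumptions made for this example; neither company has described its mechanisms at this level of detail.

```python
# Hypothetical sketch -- illustrative of "friction on identity," not any
# platform's real implementation. Limits bulk signups per IP prefix within
# a sliding time window.
import time
from collections import defaultdict, deque

class SignupThrottle:
    def __init__(self, max_signups=3, window_seconds=3600):
        self.max_signups = max_signups
        self.window = window_seconds
        self.recent = defaultdict(deque)  # ip_prefix -> recent signup timestamps

    def allow_signup(self, ip_prefix, now=None):
        """Return True to allow the signup, False to route it to extra
        verification (CAPTCHA, phone confirmation, manual review)."""
        now = time.time() if now is None else now
        timestamps = self.recent[ip_prefix]
        while timestamps and now - timestamps[0] > self.window:
            timestamps.popleft()
        if len(timestamps) >= self.max_signups:
            return False
        timestamps.append(now)
        return True
```

The point of a control like this is to raise the cost of creating fake accounts at scale without passing judgment on anything those accounts might later say.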

"None of this means that we shouldn't be taking action on content that clearly violates our policies," Gleischer noted. "The challenge is, the majority of the content we see in information operations doesn't violate our policies. It's not clearly hate speech, and it's potentially framed to not fit into that bucket. And it's not provably false. There's a lot that fits into that gray space."

Twitter's Harvey noted that the conversation around "bots" has become so pervasive that it has begun to have a cultural impact on regular human discourse.

"It is amazing the number of times you will see two people who get in an argument and one of them decides to end it by just saying, 'Well, you're just a bot [when] it is demonstrably not a bot," she said.

Pasting the label of "bot" on anyone with a differing opinion is being used as "an exit path from conflict, from disagreement," Harvey added. "In fact, you're a Russian bot. And you are here to try to sway my mind on the topic of local football teams."


Sara Peters is Senior Editor at Dark Reading and formerly the editor-in-chief of Enterprise Efficiency. Prior to that she was senior editor for the Computer Security Institute, writing and speaking about virtualization, identity management, cybersecurity law, and a myriad ...
