Threat Intelligence

10/2/2020
05:15 PM

Researchers Adapt AI With Aim to Identify Anonymous Authors

At Black Hat Asia, artificial intelligence and cybersecurity researchers use neural networks to attempt to identify authors, but accuracy is still wanting.

With disinformation on social media a significant problem, the ability to identify authors of malicious articles and the originators of disinformation campaigns could help reduce the threat from such information attacks.  

At the Black Hat Asia 2020 conference this week, three researchers from Baidu Security, the cybersecurity division of the Chinese technology giant Baidu, presented their approach to identifying authors based on machine learning techniques, such as neural networks. The researchers used 130,000 articles by more than 3,600 authors scraped from eight websites to train a neural network that could identify an author from a group of five possible writers 93% of the time and identify an author from a group of 2,000 possible writers 27% of the time.


While the results are not impressive, they do show that identifying the person behind a piece of writing is possible, said Li Yiping, a researcher at Baidu Security, during his presentation on his team's work.

"Most fake news is posted anonymously and lacks valid information to identify the author," he said. "Tracking anonymous articles is a challenging problem, but fortunately it is not impossible. Different people have different writing styles, so we are able to identify some writers by their distinct habits." 

Fake news and other forms of disinformation have become an online plague over the past decade. Driven by commercial gain, cybercriminals have used fake news to attract page views against which advertising is sold. More insidious, however, are political disinformation campaigns by foreign nations and domestic groups that use untrue information to sway public opinion.

In late September, the FBI and the US Department of Homeland Security issued a warning that both foreign actors and cybercriminals will likely use disinformation in various campaigns this election season.

"Foreign actors and cybercriminals could create new websites, change existing websites, and create or share corresponding social media content to spread false information in an attempt to discredit the electoral process and undermine confidence in U.S. democratic institutions," the agencies stated.

A variety of research efforts are underway that aim to unmask disinformation campaigns. In May, for example, a group of researchers at NortonLifelock launched BotSight, a plug-in that rates social media accounts on a bot-versus-human scale. The tool uses the known connections between social media accounts to calculate the probability that a specific account is managed by an automated bot or an actual human.

At the Black Hat USA conference, a research manager at the Stanford Internet Observatory argued that Russia tends to focus on disinformation campaigns built around fake memes and articles, while Chinese efforts focus more on creating legitimate-seeming news sources that promote a government-approved viewpoint.

Baidu Security's research effort focused on either matching an article to a known author in a list of sources, called the author attribution problem, or determining the likelihood that an article was written by a specific author, known as the author verification problem. The researchers trained a neural network using triplets of article data: an anchor article written by an author, a second article by the same author (a positive match), and an article written by someone else (a negative match).
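The presentation did not include implementation details, but the triplet setup the researchers describe is a standard metric-learning construction. The sketch below is purely illustrative of that construction, not the Baidu team's code; the encoder architecture, embedding size, and margin value are all assumptions.

    # Illustrative sketch of a triplet-loss setup for author embeddings.
    # Not the Baidu team's code: the encoder, dimensions, and margin are assumptions.
    import torch
    import torch.nn as nn

    class AuthorEncoder(nn.Module):
        """Maps a fixed-length article feature vector to a style embedding."""
        def __init__(self, in_dim: int = 768, emb_dim: int = 128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, 256),
                nn.ReLU(),
                nn.Linear(256, emb_dim),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # L2-normalize so distances reflect writing style, not vector magnitude
            return nn.functional.normalize(self.net(x), dim=-1)

    encoder = AuthorEncoder()
    triplet_loss = nn.TripletMarginLoss(margin=0.5)  # pulls anchor toward positive, pushes it away from negative

    # One training step on a batch of (anchor, positive, negative) article features
    anchor, positive, negative = torch.randn(3, 32, 768)  # placeholder inputs
    loss = triplet_loss(encoder(anchor), encoder(positive), encoder(negative))
    loss.backward()

In a setup like this, articles by the same author end up close together in the embedding space, so an unseen article can be attributed by finding its nearest known author.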

By dynamically selecting only a small share of the possible triplets, the research team built a training dataset and used it to train a neural network that identifies the author of an article. In experiments using seven datasets of increasing complexity, the researchers found their method worked well, with 93% accuracy, when attributing any of 600 articles written by five different authors, but was only 27% accurate when attributing more than 70,000 documents written by any of 2,000 different authors. 
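Li described the triplet-selection strategy only at a high level. One common way to dynamically select a small share of the possible triplets is semi-hard negative mining, sketched below as a hypothetical example; the article does not confirm this is the strategy the Baidu team used.

    # Hypothetical example of semi-hard negative mining -- one common dynamic
    # triplet-selection strategy, not confirmed as the Baidu team's approach.
    import torch

    def select_semi_hard_negatives(anchor_emb: torch.Tensor,
                                   positive_emb: torch.Tensor,
                                   candidate_embs: torch.Tensor,
                                   margin: float = 0.5) -> torch.Tensor:
        """For each anchor, pick a negative that is farther away than the positive
        but still inside the margin, so the resulting triplet is informative."""
        pos_dist = (anchor_emb - positive_emb).norm(dim=-1, keepdim=True)   # (B, 1)
        neg_dist = torch.cdist(anchor_emb, candidate_embs)                  # (B, N)
        semi_hard = (neg_dist > pos_dist) & (neg_dist < pos_dist + margin)
        # Ignore negatives outside the semi-hard band, then take the closest one left.
        masked = neg_dist.masked_fill(~semi_hard, float("inf"))
        idx = masked.argmin(dim=-1)  # rows with no semi-hard candidate fall back to index 0
        return candidate_embs[idx]

Selecting triplets this way keeps training focused on examples the network still gets nearly wrong, rather than on the vast majority of triplets it already separates easily.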

Researcher Li noted that, even at such low accuracy on the largest document set, the Baidu team's approach still outperformed other methods.

"Our method outperformed other baselines, especially when the data sets get large," he said. "In the future, we will continue to test our model and optimize our deep learning network and triplet selection strategy."

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline ...
