Threat Intelligence

7/9/2021 02:59 PM

New Framework Aims to Describe & Address Complex Social Engineering Attacks

As attackers use more synthetic media in social engineering campaigns, a new framework is built to describe threats and provide countermeasures.

Deepfake and related synthetic media technologies have helped attackers develop ever-more-realistic social engineering attacks in recent years, putting pressure on defenders to change the strategies they use to detect and address them.

In March, the FBI warned that synthetic media will play a greater role in cyberattacks, predicting that "malicious actors almost certainly will leverage synthetic content for cyber and foreign influence operations in the next 12-18 months." Some criminals have already started: in 2019, attackers used artificial intelligence-based software to impersonate the voice of a chief executive and trick the target organization into transferring $243,000.

While deepfake videos garner the most media attention, this case demonstrates that synthetic media extends far beyond video. The FBI defines synthetic content as a "broad spectrum of generated or manipulated digital content" that includes images, video, audio, and text. Attackers can use common software such as Photoshop to create synthetic content; more advanced tactics, however, rely on AI and machine learning technologies to help distribute false content.

Matthew Canham, CEO of Beyond Layer 7, has researched remote online social engineering attacks for the past four to five years. His goal is to better understand the human element behind these campaigns: how humans are vulnerable and what makes us more or less susceptible to these kinds of attacks. Ultimately, that research led to a framework Canham hopes will help researchers and defenders better describe and address them.

His first experience with synthetic media-enabled social engineering involved gift card scams that used bot technology. The first few interactions in these attacks "were almost identical, and you could tell they were being scripted," Canham says. After some conversation, once the bots got a person to respond, the attackers would pivot to person-to-person interaction to carry out the attack.

"The significance of this is that it allows the attackers to scale these attacks in ways they weren't able to previously," he explains. When they shifted from scripted chats to live ones, Canham noticed "a very dramatic change in tone," a sign the fraudsters were well-practiced and knew how to push people's buttons.

While today's defenders have access to technology-based methods for detecting synthetic media, attackers are constantly evolving to defeat the most modern defense mechanisms.

"Because of that you have … an arms race situation, in which there's never really parity between the two groups," Canham explains. "There's always sort of an advantage that slides dynamically between the two."

Another issue, he adds, is that many technology-based detection platforms are built on datasets that don't anticipate deliberate anti-forensic countermeasures. This is an important point because attackers often try to defeat defensive systems by injecting code into deepfakes and synthetic media that helps them circumvent filters and other types of defense mechanisms.

And finally, while today's technology is constantly improving, it's not always readily available to the average user and remains difficult to apply in real time. Many victims, even if they recognize a synthetic media attack, may not know which steps they should take to mitigate it.

A Human-Centric Approach
Given these difficulties, Canham is focused on human-centered countermeasures for synthetic media social engineering attacks. He proposes a Synthetic Media Social Engineering framework to describe these types of attacks and offer countermeasures that are easier to implement.

The framework spans five dimensions that apply to an attack: Medium (text, audio, video, or a combination), Interactivity (whether it's pre-recorded, asynchronous, or in real-time), Control (human puppeteer, software, or hybrid), Familiarity (unfamiliar, familiar, or close), and Intended Target (human or automation, individual target, or broader audience).
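
To illustrate how those five dimensions might be applied, the sketch below encodes them as a small Python taxonomy. This is not code from Canham or Dark Reading; the class names, enum values, and the example classification of the 2019 voice-fraud case are assumptions chosen for demonstration, mirroring the categories listed above.

# Illustrative sketch only: one way to encode the framework's five dimensions.
# The enum values mirror the categories described in the article; the class
# names and the example classification are assumptions, not Canham's code.
from dataclasses import dataclass
from enum import Enum

class Medium(Enum):
    TEXT = "text"
    AUDIO = "audio"
    VIDEO = "video"
    COMBINATION = "combination"

class Interactivity(Enum):
    PRERECORDED = "pre-recorded"
    ASYNCHRONOUS = "asynchronous"
    REAL_TIME = "real-time"

class Control(Enum):
    HUMAN_PUPPETEER = "human puppeteer"
    SOFTWARE = "software"
    HYBRID = "hybrid"

class Familiarity(Enum):
    UNFAMILIAR = "unfamiliar"
    FAMILIAR = "familiar"
    CLOSE = "close"

class IntendedTarget(Enum):
    INDIVIDUAL = "individual target"
    BROAD_AUDIENCE = "broader audience"
    AUTOMATION = "automation"

@dataclass
class SyntheticMediaAttack:
    """Describes one synthetic media social engineering attack
    along the framework's five dimensions."""
    name: str
    medium: Medium
    interactivity: Interactivity
    control: Control
    familiarity: Familiarity
    target: IntendedTarget

# Example: the 2019 CEO voice-impersonation fraud described earlier,
# classified along the five dimensions (this classification is illustrative).
ceo_voice_fraud = SyntheticMediaAttack(
    name="CEO voice impersonation (2019)",
    medium=Medium.AUDIO,
    interactivity=Interactivity.REAL_TIME,
    control=Control.HYBRID,
    familiarity=Familiarity.FAMILIAR,
    target=IntendedTarget.INDIVIDUAL,
)

Tagging incidents this way is one possible route to the taxonomy and threat-modeling uses Canham describes below, since attacks that share dimension values can be grouped and compared.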

Familiarity is a component that he calls "a game-changing aspect of synthetic media," and it refers to the victim's relationship with the synthetic "puppet." An attacker might take on the appearance or sound of someone familiar, such as a friend or family member, in a "virtual kidnapping" attack in which they threaten harm to someone the victim knows. Alternatively, they could pretend to be someone the victim has never met – a common tactic in catfishing and romance scams, Canham says.

Behavior-focused methods for describing these attacks can help people spot inconsistencies between the actions of a legitimate person and those of an attacker. Proof-of-life statements, for example, can help prevent someone from falling for a virtual kidnapping attack.

He hopes the framework will become a useful tool for researchers by providing a taxonomy of attacks and a common language they can use to discuss synthetic media. For security practitioners, it could be a tool for anticipating attacks and doing threat modeling, he says.

[Canham will discuss the framework's dimensions in his upcoming Black Hat USA briefing, "Deepfake Social Engineering: Creating a Framework for Synthetic Media Social Engineering," on Aug. 4 and 5.]

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance & Technology, where she covered financial ...