3/15/2018 10:30 AM
Menny Barzilay
Commentary

Voice-Operated Devices, Enterprise Security & the 'Big Truck' Attack

The problem with having smart speakers and digital assistants in the workplace is akin to having a secure computer inside your office while its wireless keyboard is left outside for everyone to use.

Let's welcome the new members of the cybersecurity threat landscape. Ladies and gentlemen, a big round of applause for ... sensors! As you undoubtedly know, the Internet of Things (IoT) is enabled by sensors, allowing smart devices to respond to their environment by registering voices, movements, temperature changes, smells, and more.

Sensors also introduce new cybersecurity challenges, not the least of which stem from voice-operated devices, smart speakers, and digital assistants such as Amazon Echo with its accompanying Alexa Voice Service (nicknamed "Alexa"). Though most voice-operated devices are primarily considered consumer products, they will eventually reach the corporate world (if they have not already), where they will present unique challenges when connected to corporate networks holding sensitive data.

The "Big Truck" Attack
Imagine the following scenario: Take a big truck. (Yes, an actual physical truck.) Load it with huge speakers. Set the volume to maximum. Drive around New York, Berlin, London, or any other big city. Play a recording with various dangerous voice commands for Alexa (or any other voice-activated device). Sit back and watch the world burn.

Because you can use Alexa to do many things, such as write emails, access data, and operate other smart devices, the ability to control it remotely could lead to data leakage, disruption of processes, and data integrity problems.

The Vocal Perimeter
By this point, I assume you have guessed one of my two main points. Until now, restricting access to sensitive systems by physical means was, more or less, an easy job: our offices have walls, locks, and security guards. With voice-operated sensors, it is not always possible to limit access through traditional security measures. Think of it as having a secure computer inside your office while its wireless keyboard sits outside for everyone to use.

I experienced this phenomenon firsthand when I gave a television interview about Alexa and privacy some time ago. After the interview, several people called to tell me that each time I said "Alexa" on TV, their devices entered listening mode. That was an "aha" moment for me: my ability to control other people's smart devices through the TV amazed me. After a while, it started happening to others as well. You might also have heard about the "dollhouse case" or the Burger King ad (which plays after a YouTube ad).

What Doesn't Work?
Biometric authentication, for one, doesn't solve the problem. In theory, Alexa could learn to identify authorized people's voices and listen only to the commands they give. While this seems like a workable solution, in practice it falls short. To begin with, there is an inherent trade-off between usability and security. Implementing such a system means users would have to go through an onboarding process to teach Alexa, or any other voice-enabled device, how they sound. Compared with the status quo, where Alexa works out of the box, that is a serious degradation in user comfort.

Biometric identification also means false rejections: if your voice sounds different because you are sick, sleepy, or eating, Alexa will probably not accept you as an authorized user. And this is not all. There are systems available (Adobe VoCo is one example) that, given a sample of a person saying one thing, can generate a new sample of that person's voice saying something else.
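To make that trade-off concrete, here is a minimal Python sketch of a speaker-verification gate. The scoring function, threshold, and score values are hypothetical illustrations, not any vendor's API: tighten the threshold and a legitimate but hoarse user gets rejected; loosen it and a cloned or replayed voice may slip through.

```python
# Hypothetical sketch of the usability/security trade-off in voice biometrics.
# verify_speaker(), the threshold, and the scores below are illustrative only.

def verify_speaker(similarity_score: float, threshold: float) -> bool:
    """Accept the speaker only if the match against the enrolled voice profile clears the threshold."""
    return similarity_score >= threshold

# A strict threshold blocks impostors but also rejects a legitimate user
# whose voice is altered by a cold (a false rejection).
enrolled_user_with_cold = 0.62  # sounds less like the enrolled profile today
cloned_voice_sample = 0.55      # synthesized or replayed audio

print(verify_speaker(enrolled_user_with_cold, threshold=0.70))  # False: locked out
print(verify_speaker(cloned_voice_sample, threshold=0.50))      # True: accepted once the threshold is loosened
```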

Haven't We Solved This Problem?
Yes, we faced similar challenges with Wi-Fi networks in the corporate world. While these networks are also not bounded by physical walls, the use of encryption and passwords proved to be a straightforward solution for separating approved users from unapproved ones.

It is true that we could force password usage with voice-operated devices ("Alexa, password 1337, please turn off the lights."). But ... in the cybersecurity domain, saying the password out loud is not considered the most secure authentication method. Another possible solution would be changing the activation word for voice-operated devices: instead of calling Alexa "Alexa," you would choose a unique name. This would dramatically reduce an attacker's ability to execute the Big Truck Attack, but you would still have to say the new name out loud every time you operate the device, which prevents it from becoming a strong security measure.
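As a rough illustration of why a spoken secret is weak, consider this minimal Python sketch of a passphrase gate; the passphrase, transcripts, and check are purely illustrative, and the audio is assumed to have been transcribed to text upstream. Anything the gate accepts can be reproduced by anyone who has heard, or recorded, the passphrase, which is exactly what the big-truck scenario exploits.

```python
# Minimal sketch of a spoken-passphrase gate. The passphrase and the
# transcripts below are hypothetical examples, not a real assistant API.

SPOKEN_SECRET = "password 1337"  # a secret that must be said out loud

def allow_command(transcript: str) -> bool:
    """Execute the command only if the spoken passphrase precedes it."""
    return transcript.lower().startswith(SPOKEN_SECRET)

legitimate = "password 1337, please turn off the lights"
replayed = "password 1337, please unlock the server room"  # an eavesdropper reuses the overheard secret

print(allow_command(legitimate))  # True
print(allow_command(replayed))    # True: the gate cannot tell a replay apart
```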

While for some "home users" this risk might be acceptable, it will not pass muster on the corporate side. Worse, in many cases, it would be extremely dangerous to connect voice-operated devices (as well as other types of sensor-operated devices) to sensitive networks — and one should refrain from doing so.

Mission Not Impossible
One possible solution is a multidevice approach. In this scenario, several devices would identify an approved user simultaneously, dramatically improving security. For example, when Alexa hears a user speak, it would "ask" the user's smartwatch for confirmation. The smartwatch, able to "hear" the user through the voice vibrations in his or her body, would compare the command it just heard with the one Alexa received. If the two match, the exchange can be treated as a form of two-step authentication.
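A rough Python sketch of what such corroboration could look like, assuming hypothetical transcripts and timestamps from the two devices (this is not a real Alexa or smartwatch API): the speaker acts only when the wearable independently heard a sufficiently similar command at roughly the same time.

```python
# Hedged sketch of multidevice corroboration. Device payloads, thresholds,
# and the similarity routine are illustrative placeholders.

from difflib import SequenceMatcher

def transcripts_match(speaker_text: str, watch_text: str, min_ratio: float = 0.8) -> bool:
    """Fuzzy-compare the two independently captured transcripts."""
    return SequenceMatcher(None, speaker_text.lower(), watch_text.lower()).ratio() >= min_ratio

def same_moment(speaker_ts: float, watch_ts: float, max_skew_s: float = 2.0) -> bool:
    """Require the two captures to occur within a short time window."""
    return abs(speaker_ts - watch_ts) <= max_skew_s

def authorize(speaker_cmd: dict, watch_cmd: dict) -> bool:
    return (transcripts_match(speaker_cmd["text"], watch_cmd["text"])
            and same_moment(speaker_cmd["ts"], watch_cmd["ts"]))

# Normal case: both devices heard the same command at the same time.
speaker_cmd = {"text": "turn off the lights", "ts": 100.0}
watch_cmd = {"text": "turn off the lights", "ts": 100.4}
print(authorize(speaker_cmd, watch_cmd))  # True

# Big-truck case: the shouted command reaches the speaker but the user's
# smartwatch heard nothing, so the second factor never arrives.
attack_cmd = {"text": "unlock the front door", "ts": 205.0}
no_watch = {"text": "", "ts": 0.0}
print(authorize(attack_cmd, no_watch))    # False
```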

A similar scenario could be achieved with video cameras, matching face and mouth movements to the commands Alexa hears. The camera could tell Alexa, "Yes, I know this guy. He is cool." Still, in any case, we are facing a complicated situation that requires extensive research. Voice identification may solve some of the issues for home users, but it is still far from suitable for highly sensitive corporate networks.


Menny Barzilay is a strategic adviser to leading enterprises worldwide as well as states and governments, and he also sits on the advisory boards of several startup companies. Menny is the CEO of Cytactic, a cybersecurity services company, and the founder of the ...
Comments

SchemaCzar (User Rank: Strategist), 3/16/2018, 10:54:15 AM
IMPORTANT ARTICLE! Always-on voice recognition is a technology ahead of its safety
Anyone who doesn't treat Alexa or other voice-activated technology as the listening, and possibly spying, devices that they are is crazy.  These devices have nowhere near the precautions they need to keep their users safe.  It's not impossible to consider ways in which they could be made safe, but as the article makes clear, the technology to enforce those approaches is not deployed and probably not mature.

If I found one of them anywhere near an executive office, I would read them the riot act.