When Alexa emailed a recording of a couple's conversation to a contact, it raised warning flags for enterprise security professionals.

News this week that an Amazon Echo device had recorded a family's conversation and emailed it to a seemingly random person on their contact list sent a chill through consumers who are adopting these types of Internet of Things devices.

Amazon was able to explain the sequence of events that led to the unfortunate security breach, but many consumers remain skittish about the new voice assistant sitting in their living rooms. Consumers aren't the only ones with a reason to ask questions, however. A growing number of enterprise applications, including SAP and Salesforce.com, are now targets of Echo integration through "skills," or tasks, that tie Alexa's voice recognition to the application.

According to analysts at Voicebot.ai, in January 2018 there were more than 25,700 skills published in the US. While the vast majority of these are skills for consumer-oriented integration like smart home control, a quick look in the Amazon Alexa Skills Market shows more than 1,000 business skills listed.

"There is a big push by Amazon and other large vendors to incorporate voice assistants into business applications. Voice assistants are a way for vendors to introduce their layer of AI to existing apps and business process," says Chris Morales, head of security analytics at Vectra.

According to Ovum Research, virtual digital assistants will outnumber humans on Earth by 2021. Many of them will inevitably join humans in the workplace. As voice assistant use in business grows, IT security professionals are beginning to pay attention to the devices and their impact on enterprise IT.

According to Amazon, the Alexa residential data leak came through an almost comical combination of an over-sensitive listening device and ignored voice prompts. The couple spoke strings of sounds that the Echo interpreted first as its wake word and then as a series of commands, while no one in the room heard the Echo's requests for confirmation and instruction. Nevertheless, many breaches are built on a foundation of unlikely, yet possible, sequences, so the security industry is taking note of the case.

In April, Amazon closed a vulnerability that allowed an Echo to surreptitiously send a transcript of overheard speech to a developer. And in 2017, Google issued a patch for a hardware problem that left a small number of Home Minis constantly recording the speech around them. All of this is interesting, but why should enterprise IT security pros care?

Alexa Goes to Work

A growing number of skills and integrations are being introduced for voice assistants in the office. From Echo integration with Atlassian Build Meister, which lets developers check on build status by voice, to Slack skills that let you collaborate with co-workers without ever touching a keyboard, voice assistants are becoming part of many development and operations offices.
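Under the hood, a custom skill is typically just a small web service or AWS Lambda function: Alexa matches the spoken utterance to an intent, posts a JSON request to the handler, and speaks whatever text comes back. The following is a minimal sketch of that pattern, assuming a Lambda deployment; the "BuildStatusIntent" name and the internal CI endpoint are hypothetical and not taken from any published Atlassian or Slack skill.

```python
# Minimal sketch of a custom Alexa skill backend (hypothetical example).
# Alexa POSTs a JSON "IntentRequest" to this Lambda handler; the handler
# calls an internal service and returns text for the device to speak.
import json
import urllib.request

BUILD_API = "https://ci.example.internal/api/latest-build"  # hypothetical CI endpoint


def get_build_status():
    """Ask the (hypothetical) CI server for the latest build result."""
    with urllib.request.urlopen(BUILD_API) as resp:
        return json.load(resp).get("status", "unknown")


def lambda_handler(event, context):
    """Entry point that Alexa invokes with its standard JSON request envelope."""
    request = event.get("request", {})

    if (request.get("type") == "IntentRequest"
            and request.get("intent", {}).get("name") == "BuildStatusIntent"):
        speech = "The latest build is {}.".format(get_build_status())
    else:
        speech = "Sorry, I can't help with that."

    # Standard Alexa response: plain-text speech, then close the session.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

The point for security teams is that anything reachable from that handler, whether a CI server, a ticketing system, or CRM data, becomes reachable from a spoken command.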

In addition, skills for applications like SAP Concur, Salesforce.com, and Oracle seem likely to extend voice assistant use beyond technical teams to employees in various business units with widely differing technology knowledge and skill sets.

With these integrations, one of the concerns some security professionals have is the lack of a direct tie between device and user. "With voice assistants, the action or information that is collected needs to be audited and tracked to a single user, which is a must-have for enterprise adoption. So effectively we need a strong voice match to a user so that we can associate an action to a user," says Rishi Bhargava, co-founder of Demisto.

That association has more implications for enterprise applications than for most collaboration systems. "The most obvious problem I already see is the lack of voice recognition to a specific user, in particular with Alexa. How do you manage authentication in a conversational interface?" asks Morales.
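One common way to approach that association, sketched here under stated assumptions rather than as a prescription: require Alexa account linking so every request carries an OAuth access token, refuse to act when the token is missing, and write an audit record tying the resolved identity to the intent and device. The resolve_employee and write_audit_record helpers below are hypothetical stand-ins for an identity-provider lookup and an audit or SIEM sink; the request fields come from Alexa's standard request envelope.

```python
# Hedged sketch: per-user attribution in a skill backend, assuming the skill
# uses Alexa account linking so each request includes an OAuth access token.
import datetime
import logging

logger = logging.getLogger("skill-audit")


def resolve_employee(access_token):
    """Hypothetical: introspect the token against the corporate IdP for an employee ID."""
    return "employee-for-" + access_token[:8]  # placeholder, not a real lookup


def write_audit_record(employee_id, intent_name, device_id):
    """Record who asked for what, from which device, and when (stand-in for a SIEM sink)."""
    logger.info("audit user=%s intent=%s device=%s time=%s",
                employee_id, intent_name, device_id,
                datetime.datetime.utcnow().isoformat())


def handle_request(event):
    system = event.get("context", {}).get("System", {})
    token = system.get("user", {}).get("accessToken")
    device_id = system.get("device", {}).get("deviceId", "unknown")
    intent = event.get("request", {}).get("intent", {}).get("name", "unknown")

    if not token:
        # No linked account: refuse rather than act as an anonymous shared device.
        speech = "Please link your work account before using this skill."
    else:
        employee_id = resolve_employee(token)
        write_audit_record(employee_id, intent, device_id)
        # ...perform the requested action on behalf of employee_id...
        speech = "Done. This request was logged to your account."

    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

Even with account linking, the token identifies whoever linked the device, not necessarily whoever spoke, which is exactly the gap Bhargava and Morales are pointing at.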

Vocal Dangers

So what, really, are the dangers of voice assistants in the enterprise? We've seen the possibility of a voice assistant misinterpreting voice commands (or treating random words as voice commands) and recording and sending information out of the organization. Researchers have already demonstrated exploits along those lines that could be used against a company.

Chinese researchers demonstrated that inaudible, ultrasonic commands can trigger Siri to act in an exploit they call "DolphinAttack." This is a specific instance of exploiting a simple fact about the microphones in voice assistants: they can pick up a much wider range of frequencies than humans can hear.

A significant concern comes with the possibility of a headlong rush into voice assistants in the workplace. "Most companies should be cautiously evaluating the use and potential before implementing any voice system into major systems. There needs to be a period of testing and security validation or a business runs the risk of creating a new attack surface they are not prepared to deal with," says Morales.

Bhargava agrees with the idea of proceeding cautiously but is less optimistic that it will happen. "Security is always an afterthought. This is no different for the voice assistants. In most cases, the adoption will be organic and at some point, the security teams will evaluate and put controls."

One of the greatest conveniences of voice assistants is that they're always there, listening, and ready to respond. So it seems like a paradox to say that one of the best security practices is to turn off the microphone. In effect, that means that when the person using the device leaves for the day, or for an extended period, they should mute the microphone or turn off the device.

Employees should also be made aware, through signage or training, that a listening device is in the office with them. Just as employees have had to be trained not to respond to phishing emails and to follow privacy regulations in communications, the advent of the voice assistant means that IT security has a new area of training to develop and manage for the organization.

Now, if only Alexa could be trained to deliver the classes for them.

About the Author(s)

Curtis Franklin, Principal Analyst, Omdia

Curtis Franklin Jr. is Principal Analyst at Omdia, focusing on enterprise security management. Previously, he was senior editor of Dark Reading, editor of Light Reading's Security Now, and executive editor, technology, at InformationWeek, where he was also executive producer of InformationWeek's online radio and podcast episodes.

Curtis has been writing about technologies and products in computing and networking since the early 1980s. He has been on staff and contributed to technology-industry publications including BYTE, ComputerWorld, CEO, Enterprise Efficiency, ChannelWeb, Network Computing, InfoWorld, PCWorld, Dark Reading, and ITWorld.com on subjects ranging from mobile enterprise computing to enterprise security and wireless networking.

Curtis is the author of thousands of articles, the co-author of five books, and has been a frequent speaker at computer and networking industry conferences across North America and Europe. His most recent books, Cloud Computing: Technologies and Strategies of the Ubiquitous Data Center, and Securing the Cloud: Security Strategies for the Ubiquitous Data Center, with co-author Brian Chee, are published by Taylor and Francis.

When he's not writing, Curtis is a painter, photographer, cook, and multi-instrumentalist musician. He is active in running, amateur radio (KG4GWA), the MakerFX maker space in Orlando, FL, and is a certified Florida Master Naturalist.
