A Warning for Wearables: Think Before You Emote
An examination of how wearable devices could become the modern equivalent of blogs broadcasting proprietary workplace information directly to the Internet of Things -- and beyond.
March 8, 2016
In 2013, I wrote an article for The Guardian about a woman whose wearable device measured her stress at work. After realizing her anxiety spiked every day at the same time her oppressive manager checked in at her cubicle, she tallied her physiological data for a month. She then compared notes with two colleagues suffering similar tensions with the same manager, and all three took their data to the CEO. Presenting quantified proof (time-stamped readings correlating with the recurring spikes in stress), the employees demanded the manager be fired before their health insurance premiums went up.
Put simply, and literally, they also noted: “He’s killing us.”
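The original piece doesn't describe the mechanics, but the tallying itself is simple. As a purely illustrative sketch (the file name, column names, and export format are hypothetical, not taken from the story), a few lines of Python could group a month of time-stamped stress readings by hour of day and expose the recurring spike:

```python
# Hypothetical example: group a month of time-stamped stress readings by hour
# of day to reveal a recurring daily spike. The file name and column names
# ("timestamp", "stress") are assumptions; real wearable exports vary by vendor.
import csv
from collections import defaultdict
from datetime import datetime

def hourly_stress_averages(path):
    totals = defaultdict(float)
    counts = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            hour = datetime.fromisoformat(row["timestamp"]).hour
            totals[hour] += float(row["stress"])
            counts[hour] += 1
    return {hour: totals[hour] / counts[hour] for hour in sorted(totals)}

if __name__ == "__main__":
    for hour, avg in hourly_stress_averages("stress_export.csv").items():
        print(f"{hour:02d}:00  average stress {avg:.1f}")
```

Run against a real export, a table like that is exactly the kind of quantified proof the employees brought to their CEO.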
It’s easy to ignore the intimate nature of personal data. We’ve been trained to give it away by accepting terms-and-conditions agreements for services without fully understanding how our data will be used or shared. We’ve also been trained to share our data with friends via social media, a practice most wearable manufacturers actively reinforce. Whether you’re letting the world track your fitness progress or how you slept last night, it’s dead simple to set permissions that broadcast aspects of your inner life you’ve never been able to reveal before.
The same logic applies to employees, as the following scenario demonstrates.
Death by Data
The recently appointed EVP of Social Media for his top-ten PR firm – let’s call him Tom Delancey – assumed he’d been called to see his CEO about a holiday bonus. Having secured a choice article in Fast Company describing the company’s forward-thinking approach to wearable devices and innovation, Tom expected CEO Cheryl to praise him for positioning the firm as a market leader to its clients. But upon closing the door to her swanky 30th-floor corner office, Tom was in for quite a shock:
“You’re fired, Tom. In your Fast Company article you mentioned that your innovation meetings with our biggest client happen every Thursday over lunch. One of our competitors went on LinkedIn, identified everyone on your marketing team and their Twitter handles, and followed every tweet generated by their wearable devices. Using a pretty simple algorithm, they correlated spikes in people’s heart rates and other readings with the team’s mood. Apparently something went very wrong near the end of last week’s session, because everyone’s data registered a spike in negative emotion.”
Tom’s jaw dropped as his stress-sensing watch registered a massive increase in tension. He gasped as Cheryl turned the laptop on her desk toward him so he could read an Ad Age headline set in large type: “Delancey Debunked: Our New Client Finds the Off Switch for Quantified Employees.”
“Our new client?” asked Tom. “You mean...”
“Correct,” Cheryl interrupted. “Our biggest client just fired our agency because you unintentionally broadcast your team’s emotional and quantified data. Your people didn’t have to say a word. Their data essentially announced that our client’s new product sucks.”
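Cheryl’s “pretty simple algorithm” is barely an exaggeration. As a purely hypothetical sketch (the post format, field names, meeting window, and threshold are all invented for illustration, not taken from any real device), correlating auto-posted heart-rate readings with a known weekly meeting takes only a few lines of Python:

```python
# Hypothetical sketch: flag a shared spike in auto-posted heart-rate readings
# during a known weekly meeting window. The data format, field names, and
# threshold are invented for illustration; real wearable auto-posts vary.
from datetime import datetime

MEETING_DAY = 3                      # Thursday (Monday = 0)
MEETING_START, MEETING_END = 12, 13  # noon to 1 p.m.
SPIKE_THRESHOLD = 1.25               # 25% above a person's everyday baseline

def in_meeting(t):
    return t.weekday() == MEETING_DAY and MEETING_START <= t.hour < MEETING_END

def meeting_mood(posts):
    """posts: list of dicts like {"user": ..., "time": ISO 8601 string, "bpm": int}."""
    # Baseline = average heart rate from posts outside the meeting window.
    baselines = {}
    for p in posts:
        if not in_meeting(datetime.fromisoformat(p["time"])):
            baselines.setdefault(p["user"], []).append(p["bpm"])
    baselines = {u: sum(v) / len(v) for u, v in baselines.items()}

    # Flag anyone whose in-meeting reading clears their baseline by the threshold.
    spikes = {}
    for p in posts:
        t = datetime.fromisoformat(p["time"])
        if in_meeting(t) and p["bpm"] > baselines.get(p["user"], float("inf")) * SPIKE_THRESHOLD:
            spikes[p["user"]] = p["bpm"]
    return spikes

if __name__ == "__main__":
    sample = [
        {"user": "@alice", "time": "2016-03-03T09:00:00", "bpm": 68},
        {"user": "@alice", "time": "2016-03-03T12:40:00", "bpm": 96},
        {"user": "@bob",   "time": "2016-03-03T09:00:00", "bpm": 72},
        {"user": "@bob",   "time": "2016-03-03T12:45:00", "bpm": 101},
    ]
    print(meeting_mood(sample))  # both spiked late in Thursday's meeting
```

The unsettling part is that anyone who can see the posts can run the same analysis; no access to the firm’s systems is required.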
Data by Design
While this vignette may seem futuristic, devices already exist that are designed to read your brainwaves so you can control objects with your thoughts. An employee who forgets to switch his mental settings from public to private could conceivably tweet a negative thought about a client. Companion robots that use affective computing to analyze and influence human emotions could likewise start broadcasting our moods at work.
Just think how easy it might be for your office photocopier to post on Medium after you aggressively punch the “print” button a few times before a big meeting. More likely, once your fingerprint is matched to your actions, the copy machine may determine that your anger issues are hurting the office, and you’ll be let go.
Though the example I’ve cited may seem far-fetched, it represents an emerging and important privacy issue: how employees publicly share data from their IoT devices in and out of the workplace. Just as companies learned to set social media policies to guide employees (the familiar “tweets are my own” disclaimer on Twitter), organizations need to set similar rules for wearables.
In some ways, this boils down to the human dimension of risk-based security, an area that will be addressed at the upcoming Rock Stars of Risk-Based Security technology event in Washington DC next month. Since security is fundamentally a human-to-human conflict, understanding users, attackers, and defenders is core to containing and minimizing the threats our Internet-based society faces.
It’s also important to start slowly when building employee wellness or other programs that use quantified-self tools, as Ken Favaro and Ramesh Nair point out in their excellent article, The Quantified Self Goes Corporate. Rather than chasing quick hits or flashy results like my fictional Tom Delancey, organizations should build what the authors call the “quantified core,” which “is the enterprise equivalent of the ‘quantified self’ movement, the tracking of individuals’ health and daily life patterns for the sake of improving both.” This process demands buy-in from the C-suite and a broad understanding of what it means to improve employee well-being, with physical, emotional, and cultural sensitivities at any program’s core.
Wearable data devices are the modern equivalent of blogs broadcasting directly to the Internet of Things, and that analogy is a useful frame for your policies on how employees use their quantified tools in the workplace. Employees may not realize their data could be read as inappropriate or as a breach of corporate confidence, and unless they’ve updated their settings accordingly, that choice isn’t even theirs to make.
Find out more about security threats at Interop 2016, May 2-6, at the Mandalay Bay Convention Center, Las Vegas. Register today and receive an early bird discount of $200.