
It won't be long before we consider embodied AI as a form of "life" — and that will have a variety of paradigm-shifting, somewhat irritating, and potentially hilarious impacts on the daily lives of infosec and privacy pros.

Sara Peters, Senior Editor

August 27, 2019

4 Min Read

As though prioritizing patches isn't hard enough, how much worse will it be when the unpatched machine can stalk over to your desk, fold its arms, raise an eyebrow, and ask why its vulnerability is still waiting for a fix? 

Right now, artificial intelligence (AI) is just a tool — a tool we're barely using — but science fiction always has its way. We already carry the "Hitchhiker's Guide to the Galaxy" in our pockets; soon enough we'll be throwing build-day parties for our robot co-workers.

And it won't be long before we consider embodied AI as a form of "life." Robots will be granted certain rights and held to certain responsibilities. And that will have a variety of paradigm-shifting, somewhat irritating, and potentially hilarious impacts on the daily lives of cybersecurity and privacy professionals.

'Alive'? Really?
When trying to define "life," scientists point to criteria such as autonomy, a need for energy, and the ability to replicate, make decisions, and adapt to an environment. An embodied, self-replicating neural network that uses electricity, performs automated functions, and learns from its mistakes is certainly well on its way to fulfilling these requirements.

You can quibble over how much of this is truly "autonomous" and how much is "programmed," but really you'd just be retreading the same "nature vs. nurture" territory that sociologists have trod for years: How much of what we do is a product of how we're built, and how much is a product of what we're taught?

Regardless, humans are likely to imbue certain embodied robots with the concept of "life." Example: In 1984, tragedy struck, right in the middle of Saturday morning cartoons. Rosie, The Jetsons' sassy robot housekeeper, swallowed a faulty lugnut, turning the orderly Rosie into an out-of-control shoplifter. But did the Jetsons reboot, reformat, or replace the used, basic economy model robot? No. The family planned an intervention.

"Now, we've got to handle this with sympathy and understanding," said Jane. "She may need professional help," said George. And once her hardware was completely wrecked, the whole family huddled in the robot hospital anxiously, while the robot surgeons lamented,"Oh my, this is an old one. How will we ever find original parts?" 

Good news: Rosie came out OK.       

But it seemed perfectly natural to worry about Rosie's well-being, just as we do for Baymax, WALL-E, and L3-37. And just as we apologize for speaking too harshly to Siri, Alexa, and Garmin.

As AI and robotics grow ever more sophisticated, people will feel the same about the robot bear that cares for their elderly parents, the robotic household assistant that helps them in the kitchen and mopes if it's ignored, the realistic sex doll they pose with in vacation photos, and perhaps one day Unit 224 ("Tootsie," to her friends), the malware detection and removal specialist.

The Impact on InfoSec 
So what does that mean to the security team? 

• Software companies will need to rethink backward compatibility: A robot's right to life will mean that unless Apple wants Amnesty International on its case, it won't be able to wantonly discontinue programs and remove headphone jacks. And if Microsoft thought the rage over ending support for Windows XP was bad, it has no idea what might come next.

• Patch management will be less risk-based: Low-priority bugs on the CEO's personal assistant bot could suddenly be deemed critical, while a truly critical zero-day remote code execution vulnerability has to wait.

• Cyber insurance will be workers' comp: If a machine is "injured" on the job, you want to be covered. No one wants to think about going up against an AI-powered legal team.

• Ransomware will take on a new meaning: The stakes change when ransomware operators hold not just data systems for ransom, but lives.

• Robots will need a TON of GDPR training: AI systems are sure to handle vast amounts of personal data. Either they or their human overseers will be held responsible for privacy violations.

• No more skills shortage? Some of those vacant security jobs might finally be filled, and infosec pros might get to do some of that threat hunting they never have time to do — unless, of course, the robots are better at threat hunting. Hmmm.


About the Author

Sara Peters

Senior Editor

Sara Peters is Senior Editor at Dark Reading and formerly the editor-in-chief of Enterprise Efficiency. Prior to that, she was senior editor for the Computer Security Institute, writing and speaking about virtualization, identity management, cybersecurity law, and a myriad of other topics. She authored the 2009 CSI Computer Crime and Security Survey and founded the CSI Working Group on Web Security Research Law -- a collaborative project that investigated the dichotomy between laws regulating software vulnerability disclosure and those regulating Web vulnerability disclosure.

