Careers & People
Commentary
7/5/2017 10:30 AM
Tom Pendergast

Avoiding the Dark Side of AI-Driven Security Awareness

Can artificial intelligence bring an end to countless hours of boring, largely ineffective user training? Or will it lead to a surveillance state within our information infrastructures?

Like many, I'm genuinely excited about the emerging influence of artificial intelligence, or AI. I love it on my smart devices, when I shop, and (in limited form) in my car. But I'm most pumped about what it could bring to the difficult and too often tedious task of educating humans about the risks they pose through mishandling information and exposing the organization to cybercrime. I'm optimistic that AI may kill old-school security awareness, in which we subject an entire employee population to long, boring, mandatory training.

With the right data and intelligent processing, we could place employees within a smart matrix where the very systems they use to interact with information — I'm talking browsers, Outlook, cloud storage — could also provide them with short, individually targeted units of instruction in just the right dose for the risks that they manifest. All it will take is the full integration of smart IT infrastructure with a modular matrix of risk-based content. What could possibly be the problem?
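To make that idea concrete, here's a minimal sketch of what such a modular, risk-based content matrix might look like in code. Every event name and lesson title below is invented for illustration; a real system would draw on behavioral analytics rather than a hardcoded dictionary.

```python
# Minimal sketch of a risk-based microlearning matrix: map observed risky
# behaviors to short, targeted lessons instead of one long annual course.
# All event names and lesson titles here are hypothetical.

RISK_CONTENT_MATRIX = {
    "clicked_suspicious_link": "2-minute refresher: spotting look-alike URLs",
    "emailed_unencrypted_pii": "Quick guide: sending sensitive data safely",
    "reused_weak_password": "Micro-lesson: building a strong passphrase",
}

def lessons_for(events):
    """Return the short lessons triggered by a user's observed risk events."""
    seen = set()
    lessons = []
    for event in events:
        lesson = RISK_CONTENT_MATRIX.get(event)
        if lesson and lesson not in seen:
            seen.add(lesson)
            lessons.append(lesson)
    return lessons

print(lessons_for(["clicked_suspicious_link", "reused_weak_password"]))
```

The point of the sketch is the shape of the idea: training is selected per person, per risk, in small doses, rather than broadcast to everyone at once.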

The Problem
"I'll tell you what's the problem, Tom," my contrarian friend Konrad said when I presented him with my rosy depiction of the future:

"Do you think I really want the computers watching everything I'm doing and then telling me what I need to know, like some nasty old school teacher looking over my shoulder and telling me what I should do to get the answer right? And then that same teacher writes home to my mother to tell her where I've gone wrong or maybe keeps me after school for detention. I'll take death by PowerPoint to living in a surveillance state, thank you very much."

The sobering truth is that if we don't watch out, we will create a surveillance state within our information infrastructure. We will have the capacity to recognize employees' flaws and peculiarities in ways that feel invasive and creepy (though we certainly identify such problems today). We will have the capacity to individualize instruction and reminders in ways that may feel like we know what people are thinking of doing before they do it. ("Tempted to click that link, Tom? I can see why: it looks much like the legitimate links you often click on. But take a closer look ..." You get the idea.)
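The kind of just-in-time nudge imagined above could be sketched very simply: flag a link whose domain closely resembles, but doesn't match, one the user is known to trust. The trusted-domain list and similarity threshold below are illustrative assumptions, not a production heuristic.

```python
# Hedged sketch of a "look-alike link" nudge: warn when a domain nearly
# matches a trusted one. Trusted domains and the 0.8 threshold are
# invented for illustration.

from difflib import SequenceMatcher

TRUSTED_DOMAINS = ["example.com", "mediapro.com"]

def lookalike_warning(domain, threshold=0.8):
    """Return a nudge message if domain nearly matches a trusted domain."""
    for trusted in TRUSTED_DOMAINS:
        ratio = SequenceMatcher(None, domain, trusted).ratio()
        if domain != trusted and ratio >= threshold:
            return (f"Tempted to click? '{domain}' looks a lot like "
                    f"'{trusted}'. Take a closer look.")
    return None  # no warning for exact matches or dissimilar domains

print(lookalike_warning("examp1e.com"))
```

Whether a nudge like this feels helpful or like a teacher looking over your shoulder depends entirely on how (and how transparently) it is delivered.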

An (Automated) Helping Hand
Remember Clippy, the first generation of digital "helper" that Microsoft introduced years ago? It didn't work. It tried too hard to be cute, and it often didn't know what you needed.

But the next generations of contextual helpers have gotten better and better. I suspect we'll see a natural evolution of AI-driven security awareness training into forms that don't feel much like training at all, but instead feel like useful advice offered in the service of protecting us (and by extension the organization) from mistakes.


If we do it right (and now I'm situating myself as part of the "we" that will be producing the next generation of content), we'll provide interactions that are amusing and relevant and cued up not just to the mistake you may have been about to make, but also to your own personal preferences for how you like to consume your learning. Maybe my curmudgeonly friend Konrad gets a bulleted to-do list, while humor-loving Zack gets a quick animated cartoon. After all, these same systems that identify your incipient mistakes are also capable of learning your personal preferences and configuring learning experiences that don’t irritate you.
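As a rough illustration of that last point, here's a small sketch of preference-aware delivery: the same lesson rendered in the format each learner responds to best. The learner names echo the examples above; the formats and renderers are hypothetical.

```python
# Sketch of preference-adaptive delivery: one lesson, rendered per learner.
# Format names and renderers are invented for illustration.

from dataclasses import dataclass

@dataclass
class Learner:
    name: str
    preferred_format: str  # e.g. "checklist", "cartoon", "quiz"

def deliver(lesson, learner):
    """Render a lesson in the learner's preferred format."""
    renderers = {
        "checklist": lambda l: f"Bulleted to-do list for: {l}",
        "cartoon": lambda l: f"30-second animated cartoon about: {l}",
        "quiz": lambda l: f"Three-question quiz on: {l}",
    }
    render = renderers.get(learner.preferred_format,
                           lambda l: f"Short text tip: {l}")  # safe fallback
    return render(lesson)

konrad = Learner("Konrad", "checklist")
zack = Learner("Zack", "cartoon")
print(deliver("phishing link awareness", konrad))
print(deliver("phishing link awareness", zack))
```

In a real system the `preferred_format` field would itself be learned from engagement data rather than declared up front.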

There's a lot to figure out here, and those of us in the security awareness business have only just begun. Experiments in microlearning and reinforcement in a variety of styles point us in the right direction, but we're still waiting for behavioral analytics tools to become more widely deployed, and for more people to realize that their employees don't have to endure boring annual training (with its countless wasted hours).

But I do believe that AI-driven security awareness is inevitable: we won't really be able to keep our work systems from understanding us (sorry, Konrad!) the way our cars, devices, and stores understand us already. Provided we understand the risks correctly, it's up to us to make sure that learning to protect data with the help of AI is the enjoyable experience it can be, not the dark prison that some of us fear.

Tom Pendergast, Ph.D., is the chief architect of MediaPro's Adaptive Awareness Framework, a vision of how to analyze, plan, train and reinforce to build a comprehensive awareness program, with the goal of building a risk-aware culture. He is the author or editor of 26 books ... View Full Bio