Dark Reading is part of the Informa Tech Division of Informa PLC



7/5/2017
10:30 AM
Tom Pendergast
Commentary

Avoiding the Dark Side of AI-Driven Security Awareness

Can artificial intelligence bring an end to countless hours of boring, largely ineffective user training? Or will it lead to a surveillance state within our information infrastructures?

Like many, I'm genuinely excited for the emerging influence of artificial intelligence, or AI. I love it on my smart devices, when I shop, and (in limited form) in my car. But I'm most pumped about what it could bring to the difficult and too often tedious task of educating humans about the risks they pose through their mishandling of information and their exposure of the organization to cybercrime. I'm optimistic that AI may kill old-school security awareness, where we subject an entire employee population to long, boring, required training.

With the right data and intelligent processing, we could place employees within a smart matrix where the very systems they use to interact with information — I'm talking browsers, Outlook, cloud storage — could also provide them with short, individually targeted units of instruction in just the right dose for the risks that they manifest. All it will take is the full integration of smart IT infrastructure with a modular matrix of risk-based content. What could possibly be the problem?

The Problem
"I'll tell you what’s the problem, Tom," my contrarian friend Konrad said when I presented him with my rosy depiction of the future:

"Do you think I really want the computers watching everything I'm doing and then telling me what I need to know, like some nasty old school teacher looking over my shoulder and telling me what I should do to get the answer right? And then that same teacher writes home to my mother to tell her where I've gone wrong or maybe keeps me after school for detention. I'll take death by PowerPoint to living in a surveillance state, thank you very much."

The sobering truth is that if we don't watch out, we will create a surveillance state within our information infrastructure. We will have the capacity to recognize employees' flaws and peculiarities in ways that feel invasive and creepy (though we certainly recognize and identify such problems today already). We will have the capacity to individualize instruction and reminders in ways that may feel like we know what people are thinking about doing before they do it. ("Tempted to click that link, Tom? I can see why: it looks much like the legitimate links you often click on. But take a closer look…" You get the idea.)

An (Automated) Helping Hand
Remember Clippy, the first generation of digital "helper" that Microsoft introduced years ago? It didn't work. Clippy tried too hard to be cute, and it often didn't know what you needed.

But the next generations of contextual helpers have gotten better and better. I suspect we'll see a natural evolution of AI-driven security awareness training into forms that don't feel much like training at all, but instead feel like useful advice offered in the service of protecting us (and by extension the organization) from mistakes.


If we do it right (and now I'm situating myself as part of the "we" that will be producing the next generation of content), we'll provide interactions that are amusing and relevant and cued up not just to the mistake you may have been about to make, but also to your own personal preferences for how you like to consume your learning. Maybe my curmudgeonly friend Konrad gets a bulleted to-do list, while humor-loving Zack gets a quick animated cartoon. After all, these same systems that identify your incipient mistakes are also capable of learning your personal preferences and configuring learning experiences that don’t irritate you.
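To make the idea concrete, here's a minimal sketch of how that matching might work. Everything in it (the event names, lesson titles, and format templates) is invented purely for illustration; no real product, telemetry feed, or API is assumed.

```python
# Hypothetical sketch: map an observed risky action to a short,
# targeted lesson, formatted to the employee's preferred learning style.
# All names below are invented for illustration.

RISK_LESSONS = {
    "clicked_suspicious_link": "Spotting look-alike URLs",
    "emailed_unencrypted_pii": "Sending sensitive data safely",
    "reused_password": "Why unique passwords matter",
}

STYLE_FORMATS = {
    "checklist": "To-do list: {lesson}",
    "cartoon": "60-second animation: {lesson}",
    "text": "Two-minute read: {lesson}",
}

def pick_micro_lesson(risk_event: str, preferred_style: str) -> str:
    """Return a short lesson matched to the observed risk, in the
    employee's preferred format (falling back to plain text)."""
    lesson = RISK_LESSONS.get(risk_event, "General security refresher")
    template = STYLE_FORMATS.get(preferred_style, STYLE_FORMATS["text"])
    return template.format(lesson=lesson)

# Konrad gets his bulleted checklist; Zack gets his cartoon.
print(pick_micro_lesson("clicked_suspicious_link", "checklist"))
print(pick_micro_lesson("reused_password", "cartoon"))
```

The point of the sketch is the shape of the system, not the rules themselves: in practice the lookup tables would be replaced by models trained on behavioral data and on each employee's learning preferences.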

There's a lot to figure out here, and those of us in the security awareness business have only just begun. Experiments in microlearning and reinforcement in a variety of different styles point us in the right direction, but we're still waiting for behavioral analytics tools to become more widely deployed and for more people to get wise to the fact that their employees don't have to endure boring annual training (with its countless wasted hours).

But I do believe that AI-driven security awareness is inevitable: we can't really prevent (sorry, Konrad!) our work systems from understanding us in the same ways that our cars and devices and stores understand us already. Provided we understand the risks correctly, it's up to us to make sure that learning to protect data with the help of AI is the enjoyable experience that it can be — and not the dark prison that some of us fear.

Tom Pendergast is MediaPRO's Chief Learning Officer. He believes that every person cares about protecting data; they just don't know it yet. That's why he's constantly trying to devise new and easy ways to help awareness program managers educate their employees. Whether it's ...
 
