Dark Reading is part of the Informa Tech Division of Informa PLC


Threat Intelligence

11/18/2019
02:00 PM
John McClurg
Commentary

Human Nature vs. AI: A False Dichotomy?

How the helping hand of artificial intelligence allows security teams to remain human while protecting themselves from their own humanity being used against them.

Nobel Prize-winning novelist Anatole France famously opined: "It is human nature to think wisely and act foolishly." As far as our awareness extends, we are endowed with the highest levels of intellect, knowledge, and insight of any species in our vast universe. But that does not equate to omniscience or absolute precision.

Humans are by no stretch of the imagination perfect. We feel pressured, we get stressed, life happens, and we make mistakes. Error is essentially hardwired into our DNA, and for better or worse it is both natural and expected. In most cases, the human predilection to screw up is offset by a dogged pursuit of rectification. But in cybersecurity, that correction happens all too slowly; this is a realm where a simple mistake can produce dire consequences in the blink of an eye.

To put this in context, a single hack or breach can result in the loss of billions of dollars, the shutdown of critical infrastructure such as electric grids and nuclear power plants, the leak of classified government information, or the public release of vast amounts of personal data. In many instances, these all-too-real "hypotheticals" (the collapse of economies, the descent of cities into chaos, the compromise of national security, the theft of countless identities) can be traced back to human error around cybersecurity.

With so much at stake, it's not surprising that many CISOs lack confidence in their employees' ability to safeguard data. Most of the cybersecurity solutions the workforce relies on are difficult to use, so well-intentioned employees devise workarounds simply to stay productive at their jobs, and those workarounds open up new vulnerabilities.

Malicious actors don't just realize this; they use it to their ultimate advantage. Employees are only human, and social engineers excel when it comes to exploiting our human nature. But we don't want employees jettisoning their all-too-precious humanity in order to protect themselves against the ill-intentioned wiles of social engineers. Enter the helping hand of artificial intelligence (AI), which allows employees to remain human while protecting them from their own humanity being used against them. Adaptive security that's powered by AI can make up for the human error that we know can and will happen.

Employees, myself included, need help staying secure in the workplace because we are easily distracted and easily tricked. The goal is never to make mistakes or open our companies to vulnerability, but, as France put it, our wisdom is sometimes superseded by brash, spontaneous, emotionally driven actions.

But can artificial intelligence really be trusted to make up for human error? That depends on who's answering the question and how they perceive AI. Those with a level-headed view of AI, one not rooted in science fiction or Hollywood tropes, mostly agree that AI is an effective tool for catching and circumventing careless human error because it's unburdened by the foibles of human nature and the cognitive limits of rationality inherent in it. As IBM's Ginni Rometty puts it: "Some people call this artificial intelligence, but the reality is this technology will enhance us. So instead of artificial intelligence, I think we'll augment our intelligence."

AI bridges the gap between work productivity and security, bringing to fruition the concept of "invisible security": a line of defense that can essentially be categorized as human-nature-proof. The fact of the matter is that today's threat vectors morph at machine speed. With the help of AI and machine learning, employees stand a strong fighting chance against malicious actors who use the same high-speed models and algorithms to steal corporate data.

That being said, any debate positioning the trustworthiness of humans against that of AI rests on a false dichotomy: AI has yet to advance to a level of sentience where it can truly act or function without human intervention.

Humans and AI actually compensate for each other's weaknesses: AI makes up for human nature's cognitive limits and propensity for error, while humans serve as the wizard behind AI's Oz, imbuing the technology with as much or as little power as we deem appropriate. Paired correctly and responsibly, human nature and AI can combine to foster the strongest levels of enterprise cybersecurity.

Related Content:

Check out The Edge, Dark Reading's new section for features, threat data, and in-depth perspectives. Today's top story: "Soft Skills: 6 Nontechnical Traits CISOs Need to Succeed."

John McClurg is Blackberry's chief information security officer. In this role, he leads all aspects of BlackBerry's information security program globally, ensuring the development and implementation of cybersecurity policies and procedures. John comes to BlackBerry from ... View Full Bio
Comments
MonikaGehts
User Rank: Apprentice
11/26/2019 | 7:33:19 AM
A long way to
I think that AI is impossible while humans understand only up to 10% of their brain's capability. If the percentage is less than 100%, AI can be really dangerous for human beings.