Dark Reading is part of the Informa Tech Division of Informa PLC


Threat Intelligence

11/18/2019 02:00 PM
John McClurg
Commentary

Human Nature vs. AI: A False Dichotomy?

How the helping hand of artificial intelligence allows security teams to remain human while protecting them from having their own humanity used against them.

Nobel Prize-winning novelist Anatole France famously opined: "It is human nature to think wisely and act foolishly." As far as our awareness extends, we are the most intelligent, knowledgeable, and insightful species in the universe. But that does not equate to omniscience or absolute precision.

Humans are by no stretch of the imagination perfect. We feel pressured, we get stressed, life happens, and we make mistakes. Error is so inevitable, in fact, that it is essentially hardwired into our DNA, and for better or worse it is both perfectly natural and resolutely expected. In most cases, the human predilection to screw up is evened out by a dogged pursuit of rectification. But in cybersecurity, that corrective process happens all too slowly; this is a realm where simple mistakes can result in dire consequences in the blink of an eye.

To place this in context, a single hack or breach can result in the loss of billions of dollars; the complete shutdown of critical infrastructure such as electric grids and nuclear power plants; the leak of classified government information; the public release of vast amounts of personal data. In many instances, these all-too-real "hypotheticals" (the collapse of economies, the descent of cities into chaos, the compromise of national security, the theft of countless identities) can be traced back to human error around cybersecurity.

With so much at stake, it's not surprising that many CISOs lack confidence in their employees' ability to safeguard data. Most of the cybersecurity solutions used by the workforce are difficult to use, pushing well-intentioned employees toward workarounds that open up new vulnerabilities simply so they can stay productive at their jobs.

Malicious actors don't just realize this; they use it to their advantage. Employees are only human, and social engineers excel at exploiting our human nature. But we don't want employees jettisoning their all-too-precious humanity in order to protect themselves against the ill-intentioned wiles of social engineers. Enter the helping hand of artificial intelligence (AI), which allows employees to remain human while protecting them from having their own humanity used against them. Adaptive security powered by AI can make up for the human error that we know can and will happen.

Employees, myself included, need help staying secure in the workplace because we're easily distracted and easily tricked. The goal is never to make mistakes or expose our companies to vulnerability. But as France put it, our wisdom is sometimes superseded by our brash, spontaneous, emotionally driven actions.

But can artificial intelligence really be trusted to make up for human error? That depends on who's answering the question and how they perceive AI. Those with a level-headed view of AI, one not rooted in science fiction or Hollywood tropes, largely agree that AI is an effective tool for catching and circumventing careless human error because it's unburdened by the foibles of human nature and the cognitive limits of rationality inherent in it. As IBM's Ginni Rometty puts it: "Some people call this artificial intelligence, but the reality is this technology will enhance us. So instead of artificial intelligence, I think we'll augment our intelligence."

AI bridges the gap between work productivity and security, bringing to fruition the concept of "invisible security": a line of defense that can essentially be categorized as human-nature-proof. The fact of the matter is that today's threat vectors morph at machine speed. With the help of AI and machine learning, employees stand a fighting chance against the theft of corporate data by malicious actors who use the same high-speed models and algorithms to achieve their nefarious goals.

That said, any debate that positions the trustworthiness of humans against that of AI rests on a false dichotomy: AI has yet to advance to the level of sentience at which it can truly act or function without human intervention.

Humans and AI actually compensate for each other's weaknesses: AI offsets human nature's cognitive limits and errors, while humans serve as the wizard behind AI's Oz, imbuing the technology with as much or as little power as we deem appropriate. When paired correctly and responsibly, human nature and AI can combine in a copacetic manner to foster the strongest levels of enterprise cybersecurity.

Related Content:

Check out The Edge, Dark Reading's new section for features, threat data, and in-depth perspectives. Today's top story: "Soft Skills: 6 Nontechnical Traits CISOs Need to Succeed."

John McClurg is Blackberry's chief information security officer. In this role, he leads all aspects of BlackBerry's information security program globally, ensuring the development and implementation of cybersecurity policies and procedures. John comes to BlackBerry from ... View Full Bio