Dark Reading is part of the Informa Tech Division of Informa PLC


Careers & People

10/22/2019
02:30 PM
Celeste Fralick
Commentary

The AI (R)evolution: Why Humans Will Always Have a Place in the SOC

In cybersecurity, the combination of humans and machines can do what neither can do alone -- form a complementary team capable of upholding order and fighting the forces of evil.

Amber Wolff, campaign specialist at McAfee, also contributed to this article.

The 20th century was uniquely fascinated with the idea of artificial intelligence (AI). From friendly and helpful humanoid machines — think Rosie the Robot maid or C-3PO — to monolithic and menacing machines like HAL 9000 and the infrastructure of the Matrix, AI was a standard fixture in science fiction. Today, as we've entered the AI era in earnest, it's become clear that our visions of AI were far more fantasy than prophecy. But what we did get right was AI's potential to revolutionize the world around us — in the service of both good actors and bad.

Artificial intelligence has revolutionized just about every industry in which it's been adopted, including healthcare, the stock markets, and, increasingly, cybersecurity, where it's being used to both supplement human labor and strengthen defenses. Because of recent developments in machine learning, the tedious work that was once done by humans — sifting through seemingly endless amounts of data looking for threat indicators and anomalies — can now be automated. Modern AI's ability to "understand" threats, risks, and relationships gives it the ability to filter out a substantial amount of the noise burdening cybersecurity departments and surface only the indicators most likely to be legitimate.
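As a minimal sketch of the kind of automated triage described above -- not any vendor's actual pipeline -- the toy example below uses scikit-learn's Isolation Forest to flag statistical outliers in synthetic event features, so that only unusual events are surfaced for analyst review. The feature set (transfer size, hour of day, failed logins) is invented for illustration:

```python
# Sketch: surfacing anomalous events with an Isolation Forest.
# Hypothetical feature set: bytes transferred, login hour, failed-auth count.
from sklearn.ensemble import IsolationForest
import numpy as np

rng = np.random.default_rng(0)

# Mostly "normal" traffic: modest transfer sizes, business hours, few failures.
normal = np.column_stack([
    rng.normal(500, 100, 200),   # bytes transferred (KB)
    rng.normal(13, 2, 200),      # hour of day
    rng.poisson(0.2, 200),       # failed logins
])

# A few injected outliers: huge transfers at 3 a.m. with many failed logins.
suspicious = np.array([[9000, 3, 12], [8000, 2, 9]])

events = np.vstack([normal, suspicious])
model = IsolationForest(contamination=0.01, random_state=0).fit(events)
labels = model.predict(events)   # -1 = anomaly, 1 = normal

# The injected outliers (indices 200 and 201) should be among those flagged.
flagged = np.where(labels == -1)[0]
print(flagged)
```

The point of the sketch is the workflow, not the model choice: instead of an analyst reading 202 events, the system surfaces a handful of statistical outliers for human judgment.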

The benefits of this are twofold: Threats no longer slip through the cracks because of fatigue or boredom, and cybersecurity professionals are freed to do more mission-critical tasks, such as remediation. AI can also be used to increase visibility across the network. It can scan for phishing by simulating clicks on email links and analyzing word choice and grammar. It can monitor network communications for attempted installation of malware, command and control communications, and the presence of suspicious packets. And it's helped transform virus detection from a solely signature-based system — which was complicated by issues with reaction time, efficiency, and storage requirements — to the era of behavioral analysis, which can detect signatureless malware, zero-day exploits, and previously unidentified threats.
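The word-choice and link-analysis checks mentioned above can be illustrated with a deliberately simple heuristic. Everything here is invented for illustration -- the phrase list, the scoring weights, and the example email -- and a real detector would be far more sophisticated:

```python
# Sketch: a toy phishing heuristic scoring an email on urgency vocabulary
# and on anchor text that claims one domain while linking to another.
import re

URGENCY = {"urgent", "verify", "suspended", "immediately", "password"}

def phishing_score(text: str, links: list[tuple[str, str]]) -> int:
    """links: (anchor_text, href) pairs. Higher score = more suspicious."""
    score = 0
    words = set(re.findall(r"[a-z']+", text.lower()))
    score += len(words & URGENCY)          # urgency vocabulary hits
    for anchor, href in links:
        # Anchor text that looks like a domain but doesn't match the target.
        if "." in anchor and anchor.lower() not in href.lower():
            score += 3
    return score

mail = "URGENT: your account is suspended. Verify your password immediately."
links = [("paypal.com", "http://evil.example/login")]
print(phishing_score(mail, links))  # → 8
```

In practice these signals would feed a trained classifier rather than hand-set weights, but the sketch shows the kind of features -- word choice and link behavior -- the article refers to.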

But while the possibilities with AI seem endless, the idea that it could eliminate the role of humans in cybersecurity departments is about as far-fetched as the idea of a phalanx of Baymaxes replacing the country's doctors. While the end goal of AI is to simulate human functions such as problem-solving, learning, planning, and intuition, there will always be things that AI cannot handle (yet), as well as things AI should not handle. The first category includes things like creativity, which cannot be effectively taught or programmed and thus will require the guiding hand of a human. Expecting AI to effectively and reliably determine the context of an attack may also be an insurmountable ask, at least in the short term, as is the idea that AI could create new solutions to security problems. In other words, while AI can certainly add speed and accuracy to tasks traditionally handled by humans, it is very poor at expanding the scope of such tasks.

There are also tasks that humans currently excel at and that AI could someday perform -- but in which humans will always retain a sizable edge, or with which AI simply shouldn't be trusted. This list includes compliance, independently forming policy, analyzing risk, and responding to cyberattacks. In these areas, we will always need people to serve as a check on AI systems' judgment, verify their work, and help guide their training.

There's another reason humans will always have a place in the SOC: to stay ahead of cybercriminals who have begun using AI for their own nefarious ends. Unfortunately, any AI technology that can be used to help can also be used to harm, and over time AI will be every bit as big a boon for cybercriminals as it is for legitimate businesses.

Brute-force attacks, once on the wane due to more sophisticated password requirements, have received a giant boost in the form of AI. The technology combines databases of previously leaked passwords with publicly available social media information. So instead of trying to guess every conceivable password starting with, say, 111111, only educated guesses are made, with a startling degree of success.
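To see why educated guessing is so much cheaper than exhaustive search, consider this small sketch. The personal "facts" and mangling rules are entirely hypothetical, and real credential-guessing tools use far larger rule sets -- the point is only the combinatorics:

```python
# Sketch: "educated" password guessing vs. exhaustive brute force.
# A few public facts about a target (made-up data), combined with common
# mangling patterns, yield a short list of likely guesses instead of an
# astronomically large keyspace.
from itertools import product

facts = ["rex", "celeste", "1984"]           # pet, first name, birth year
suffixes = ["", "!", "123", "1984"]          # common append patterns

candidates = {
    base.capitalize() + suffix
    for base, suffix in product(facts, suffixes)
}

# A dozen likely guesses instead of, say, 36^8 exhaustive attempts.
print(len(candidates))
print(sorted(candidates)[:4])
```

Defensively, this is also why password policies that merely require a digit or a capital letter do little against attackers armed with leaked-password corpora and public profile data.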

In a similar way, AI can be used for spearphishing attacks. Right now, spearphishing typically must be done manually, limiting its practicality. But with a combination of data gathering and machine learning technologies, social media and other public sources can be used to "teach" the AI to write in the style of someone the target trusts, making it much more likely that the target will perform an action that allows the attacker to access sensitive data or install malicious software. Because the amount of work required for spearphishing will drop significantly at the same time the potential for payoff skyrockets, we'll no doubt see many more such attacks.

Perhaps the biggest threat, however, is that hackers will use their AI to turn cybersecurity teams' AI against them. One way this can be done is by foiling existing machine learning models, a process that's become known as "adversarial machine learning." The "learning" part of machine learning refers to the ability of the system to observe patterns in data and make assumptions about what that data means. But by inserting false data into the system, the patterns that algorithms base their decisions on can be disrupted — convincing the target AI that malicious processes are meaningless everyday occurrences, and can be safely disregarded. Some of the processes and signals that bad actors place into AI-based systems have no effect on the system itself — they merely retrain the AI to see these actions as normal. Once that's accomplished, those exact processes can be used to carry out an attack that has little chance of being caught.
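Here is a minimal illustration of that kind of label-flipping poisoning, using synthetic one-dimensional data and a nearest-neighbor classifier rather than any real-world detection model. The attacker's injected samples don't harm the system directly; they only retrain it to treat the malicious region as normal:

```python
# Sketch: label-flipping data poisoning against a simple classifier.
# Benign "process" features cluster near 0, malicious near 10 (synthetic).
from sklearn.neighbors import KNeighborsClassifier
import numpy as np

X_clean = np.array([[0.0], [0.5], [1.0], [9.0], [9.5], [10.0]])
y_clean = np.array([0, 0, 0, 1, 1, 1])       # 0 = benign, 1 = malicious

clean_model = KNeighborsClassifier(n_neighbors=3).fit(X_clean, y_clean)
print(clean_model.predict([[9.7]]))           # [1]: correctly flagged

# Poisoning: the attacker generates harmless events that mimic the attack
# pattern, so the defender's pipeline labels them benign during retraining.
X_poison = np.vstack([X_clean, [[9.6], [9.7], [9.8], [9.9]]])
y_poison = np.concatenate([y_clean, [0, 0, 0, 0]])

poisoned_model = KNeighborsClassifier(n_neighbors=3).fit(X_poison, y_poison)
print(poisoned_model.predict([[9.7]]))        # [0]: now waved through
```

Once the model has been retrained to see that region as normal, the real attack can use the same pattern with little chance of being caught -- which is exactly the scenario the paragraph above describes.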

Given all the ways AI can be used against us, it may be tempting to give up on AI altogether. But regardless of your feelings about it, there's no going back. As cybercriminals develop more sophisticated and more dangerous ways to use AI, it's become impossible for humans alone to keep up. The only solution, then, is to lean in, working to develop and deploy new advancements in AI before criminals do, while resisting the urge to become complacent. After all, the idea that there's no rest for the wicked seems to apply doubly to cyberattackers, and even today's most clever advancements are unlikely to stem tomorrow's threats.

The future of cybersecurity will be fraught with threats we cannot even conceive of today. But with vigilance and hard work, the combination of man and machine can do what neither can do alone — form a complementary team capable of upholding order and fighting the forces of evil.

Maybe our AI isn't so different from the movies, after all.

Related Content:

Check out The Edge, Dark Reading's new section for features, threat data, and in-depth perspectives. Today's top story: "Turning Vision to Reality: A New Road Map for Security Leadership."

Dr. Celeste Fralick has nearly 40 years of data science, statistical, and architectural experience in eight different market segments. Currently the chief data scientist and senior principal engineer for McAfee, Dr. Fralick has developed many AI models to detect ransomware ... View Full Bio
Comments
REISEN1955
User Rank: Ninja
10/23/2019 | 3:41:26 PM
On Robots and AI in surgery
A long time ago a client of mine - a dentist - told me that while robots may be helpful in surgery situations, they do lack the simple ability of a finger and thumb to feel and evaluate something with human insight and not sheer programming. Anybody remember the first Star Trek film? The scene when Spock told Kirk that the simple gesture of a hand-clasp was beyond V'Ger. True there and so here too. It may be great stuff but it cannot smell an apple or enjoy a peach.