
Commentary | Craig Hinkley | 5/30/2018 02:30 PM

Machine Learning, Artificial Intelligence & the Future of Cybersecurity

The ability to learn gives security-focused AI and ML apps unrivaled speed and accuracy over their more basic, automated predecessors. But they are not a silver bullet. Yet.

Machine learning (ML) and artificial intelligence (AI) are not what most people imagine them to be. Far removed from R2-D2 or WALL-E, today's bots, sophisticated algorithms, and hyperscale computing can "learn" from past experiences to influence future outcomes.

This ability to learn gives cybersecurity-focused AI and ML applications unrivaled speed and accuracy over their more basic, automated predecessors. This might sound like the long-awaited silver bullet, but AI and ML are unlikely, at least in the near future, to deliver the much-heralded "self-healing network." The technology does, however, bring to the table a previously unavailable smart layer that forms a critical first-response defense against hackers.
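To make that concrete, here is a minimal, illustrative sketch in Python using scikit-learn's IsolationForest, showing how such a smart layer might learn a traffic baseline from past observations and flag outliers for human review. The feature set, values, and threshold are hypothetical and not drawn from any specific product.

```python
# Illustrative sketch only: an ML "first-response" layer that learns normal
# behavior from past telemetry and flags anomalies for a human analyst.
# Feature names and numbers below are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical historical connection records:
# columns = bytes_sent, bytes_received, duration_s, distinct_ports
baseline = rng.normal(loc=[5_000, 20_000, 30, 3],
                      scale=[1_000, 5_000, 10, 1],
                      size=(1_000, 4))

# "Learn" what normal looks like from past experience.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline)

# Score new events as they arrive; -1 marks an outlier worth triaging.
new_events = np.array([
    [5_200, 21_000, 28, 3],    # resembles the baseline
    [90_000, 500, 2, 60],      # large upload across many ports
])
print(model.predict(new_events))  # e.g. [ 1 -1 ]
```

The point is not the specific model but the loop: the software learns from history and surfaces suspects quickly, while a human still decides what to do with them.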

The Double-Edged Sword
AI and ML would be complete game changers for cybersecurity teams if not for the fact that hackers have also embraced the technologies. This means that, although AI and ML form an increasing part of the cybersecurity solution, they more frequently contribute to the cybersecurity problem.

So, when thinking about AI and ML, it's important not to take an insular approach. Don't just focus on what your company needs in isolation. Consider what your competitors might be adopting in regard to scanning technology for locating security defects in code or vulnerabilities in production — and how you can best keep up. Think about what hackers could be deploying — and how you can counter it. Working in this way will help identify the new policies, procedures, processes, and countermeasures that must be put in place to keep your organization safe and to get the full benefit from any investment in AI and ML.

Cybersecurity Job Prospects
When the IT world first started talking about AI and ML, there was a deep-rooted concern that "the robots" would take over human jobs. In the cybersecurity sector, nothing could be further from the truth. No enterprise actually wants to give up human control of their security systems and, in fact, most organizations will need more security experts and data scientists to operate or "teach" the software.

Let's take a minute to understand why. Without human monitoring and continuous input, the current generation of AI and ML software cannot reliably learn and adapt; neither can it highlight when the data sets it relies on are becoming corrupted, question whether its conclusions are correct, or guarantee compliance. Indeed, most AI and ML projects fail when either the software hasn't been programmed to ask the right questions in order to learn, or, when trying to learn, the software is presented with flawed data. More will fail in the future if they cannot demonstrate compliance with global legislation and industry-specific regulations. 
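As a loose illustration of the "flawed data" failure mode, the hedged Python sketch below shows the kind of basic sanity check a human-supervised pipeline might run before retraining a model; the column names and thresholds are hypothetical.

```python
# Illustrative sketch only: reject obviously corrupted or degenerate training
# data before it can silently poison a model. Names/thresholds are hypothetical.
import pandas as pd

def training_data_looks_sane(df: pd.DataFrame) -> bool:
    if df.empty:
        return False
    if df.isna().mean().max() > 0.05:        # any column >5% missing values
        return False
    if df["label"].nunique() < 2:            # only one class: nothing to learn
        return False
    if df["label"].value_counts(normalize=True).max() > 0.99:  # extreme imbalance
        return False
    return True

events = pd.read_csv("security_events.csv")  # hypothetical labeled telemetry
if not training_data_looks_sane(events):
    raise ValueError("Training data failed sanity checks; escalate to an analyst")
# ...otherwise proceed to retraining under human review
```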

Longer term, use of AI and ML to combat cybersecurity threats might bring about closer coordination between cybersecurity professionals and data scientists. It's not unfeasible that cybersecurity teams might recruit data scientists or that companies will begin to look for cybersecurity experts with specific data science expertise. Eventually both roles and disciplines could even merge.

So, far from discouraging graduates from studying cybersecurity, AI, and data science, the growth in these technologies should encourage students to take these courses and acquire some specialization in the field. Looking broadly across the IT security sector, the current skills and knowledge gap is unlikely to go away — and, in fact, as companies struggle to understand AI on a practical level, the number of open job vacancies could increase.

Who Is in Charge?
It's important that we as humans don't lose the capacity to oversee and manage AI and ML technology — in particular, that we don't abdicate responsibility for the outcomes produced by AI and ML software. The law has some catching up to do in this regard, but we already are seeing a lot more written about AI and ML transparency, trustworthiness, and interoperability — particularly for those using AI or ML within regulated markets such as banking and insurance.

It is a brave new world out there. So, stay abreast of new AI- and ML-based cybersecurity technologies, products, and services. Some of these are going to be real industry turning points, and you don't want to be the last person to find out about them. As AI and ML begin to play even more direct and obvious roles in IT infrastructures, it's vital for cybersecurity folks to keep their knowledge current and relevant. Try to get to at least one conference a year on the topic, jump on a webinar once a quarter, and read some quality independent research each month so you have a real feel for what's happening out there.

This is the next frontier, and it's time to boldly go.

Craig Hinkley joined WhiteHat Security as CEO in early 2015, bringing more than 20 years of executive leadership in the technology sector to this role. Craig is driving a customer-centric focus throughout the company and has broadened WhiteHat's global brand and visibility ...