What Cybersecurity Pros Really Think About Artificial Intelligence
While there's a ton of unbounded optimism from vendor marketing and consultant types, practitioners are still reserving a lot of judgment.
March 13, 2020
![](https://eu-images.contentstack.com/v3/assets/blt6d90778a997de1cd/blt8348d1ee5d83d859/64f0d45b579b0339389b78c6/01-aiandhuman.png?width=700&auto=webp&quality=80&disable=upscale)
Technology and business leaders have singled out cybersecurity as one of the top enterprise use cases for artificial intelligence (AI) and machine learning (ML) today. According to the latest studies, AI technology in cybersecurity is poised to grow more than 23% annually through the second half of the decade. That growth would take the cybersecurity AI market from $8.8 billion last year to $38.2 billion by 2026.
The question seasoned cybersecurity veterans are asking themselves right now is, "How much does AI really help security postures and security operations?" There's a ton of unbounded optimism from the vendor marketing and consultant types, but practitioners are still reserving a lot of judgment. As we piece together the surveys of cybersecurity industry perceptions, it becomes clear that a big part of the industry's evolution in the 2020s will be how it can effectively balance AI and human intelligence. Here's what the data shows at the moment.
According to a study last year by Capgemini Research Institute, before 2019 only about one in five cybersecurity organizations used AI in their technology stacks. But Capgemini researchers said "adoption is poised to skyrocket," with about 63% of organizations planning AI deployments by the end of 2020. The use cases with the highest potential tend to be in operational technology (OT) and the Internet of Things (IoT).
Even as the AI-powered cybersecurity freight train barrels forward, many security professionals believe that human intelligence will still offer the best results on a case-by-case basis. A recent study conducted by White Hat Security at last month's RSA Conference showed that 60% of security professionals are still more confident in cyberthreat findings verified by humans than in those generated by AI. Around a third of respondents said intuition is the most important element fueling human analysis, 21% said creativity is the human advantage, and 20% said prior experience and frame of reference are what make people crucial to the security operations process.
A study by Osterman Research showed that part of the problem in this early stage of deployment is a strong perception that AI isn't quite ready for prime time. Common complaints included inaccurate results, the performance trade-offs of placing certain types of AI platforms on the endpoint, difficulty of use, and the ever-present concern over false positives.
Misgivings about an over-reliance on AI also stem from the fact that cybersecurity pros believe the jobs they do are too complex to be replicated by a machine. Findings from a Ponemon report last year show that over half of security pros said they wouldn't be able to train AI to do the tasks their teams perform, and that they are more qualified than AI to catch threats in real time. Almost half also said that human intervention is a necessity in network protection.
Nevertheless, cybersecurity executives increasingly believe that AI is crucial to speeding up response times and reducing the cost of preventing breaches. According to Capgemini, three in four executives said AI in cybersecurity accelerates breach response -- both in detection and remediation. And around 64% said it also reduces the cost of detection and response.
In spite of the misgivings about an over-reliance on AI, consensus seems to be building for a middle-ground approach, where AI is no magic wand but can be a helpful way of augmenting human intelligence in the SOC and throughout the security organization. Approximately 70% of security professionals agreed that AI makes teams more efficient by eliminating as much as 55% of their manual tasks, according to the White Hat study. That's helping them focus on more important tasks and reducing stress levels.