Cybersecurity Will Account for Nearly One-Quarter of AI Software Market Through 2025
A boom in artificial intelligence-powered detection and remediation tools pushes security spending to the top of the AI market, according to Forrester.
By 2025, the artificial intelligence (AI) software market will expand from $33 billion in 2021 to $64 billion, according to a new report. And cybersecurity is the fastest-growing category of AI spending, with a 22.3% compound annual growth rate (CAGR).
That's according to the "Global AI Software Forecast 2022" from Forrester Research. "Cybersecurity is the fastest AI software growth category, with a focus on the real-time monitoring of and response to attacks," the report states. The next two categories, customer and human capital management (22%) and process optimization, knowledge, and data intelligence (18.3%), also have cybersecurity elements, so the impact on security tool makers could be even more significant.
This comports with the emphasis companies have placed on their AI-enhanced software and services. For example, credit behemoth Visa revealed it has spent a half-billion dollars on data analytics and AI in the past five years. It's using those tools, along with conventional cybersecurity measures, to keep the fraud rate at what Visa calls historic lows despite e-commerce growth.
Organizations can deploy AI for cybersecurity anywhere there are repetitive actions and expected behaviors, including attack surface management, extended detection and response (XDR), and user and entity behavior analytics (UEBA). Forrester calls out SentinelOne as a prime example of an XDR success story, pointing out the company's 120% year-over-year revenue growth in fiscal 2022. In March, SentinelOne added identity threat detection and response to its platform when it acquired Attivo Networks.
An AI tool can learn what normal activity looks like for a particular device or account and then flag when that endpoint acts outside the norm. Such automated detection is invaluable, considering the impossibility of staffing sufficiently to have human eyes watching every part of the network. And researchers are finding ways to apply large language models like GPT-3 to practical tasks, such as tracing networks of exploit forums. To provide some perspective on such developments, Dark Reading released a report in September, "How Machine Learning, AI & Deep Learning Improve Cybersecurity," about how to assess a vendor's AI claims and define success criteria for them.
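To make the idea concrete, here is a minimal Python sketch of that learn-the-baseline-then-flag-outliers pattern, using an off-the-shelf anomaly detector on invented per-endpoint activity features. It is an illustration of the general technique, not any vendor's implementation; the feature names, values, and contamination setting are assumptions for the example.

```python
# Illustrative endpoint anomaly detection (not a vendor implementation).
# Feature columns and values are invented for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline activity for one endpoint: [logins/hour, MB transferred, distinct hosts contacted]
normal_activity = rng.normal(loc=[4.0, 120.0, 6.0], scale=[1.0, 25.0, 2.0], size=(500, 3))

# Fit the detector on what "normal" looks like for this endpoint.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_activity)

# New observations: one typical hour, one that resembles bulk exfiltration.
new_activity = np.array([
    [5.0, 130.0, 7.0],     # in line with the learned baseline
    [40.0, 5000.0, 90.0],  # far outside the learned norm
])

for features, label in zip(new_activity, detector.predict(new_activity)):
    status = "ALERT: outside learned norm" if label == -1 else "normal"
    print(features, "->", status)
```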
One hitch in AI's gallop is the challenge of setting up a system so that it flags what human analysts need to assess without creating alert fatigue. A survey earlier in 2022 revealed that almost half (46%) of IT security staff said their AI systems created too many false-positive alerts for them to address. An optimist, however, would see the false-positive problem as an opportunity for growth, opening up a new market for fine-tuning services.
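The fine-tuning problem largely comes down to where the alert threshold sits: set it low and analysts drown in false positives; set it high and real attacks slip through. The short sketch below illustrates that tradeoff with invented anomaly scores for hypothetical benign events and attacks; the numbers are assumptions made up for the example.

```python
# Sketch of how loosening or tightening an alert threshold trades false
# positives against missed detections (all numbers invented for illustration).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical anomaly scores: 1,000 benign events and 10 real attacks.
benign_scores = rng.normal(0.3, 0.1, 1000)
attack_scores = rng.normal(0.8, 0.1, 10)
scores = np.concatenate([benign_scores, attack_scores])
labels = np.concatenate([np.zeros(1000), np.ones(10)])

for threshold in (0.4, 0.6, 0.8):
    alerts = scores >= threshold
    false_positives = int((alerts & (labels == 0)).sum())
    caught = int((alerts & (labels == 1)).sum())
    print(f"threshold={threshold}: {int(alerts.sum())} alerts, "
          f"{false_positives} false positives, {caught}/10 attacks caught")
```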
For more insights, visit the Forrester Research blog entry about the report.