
Finding the AI ROI

Is AI a good security investment? Many say yes, but it depends on how you deploy your artificial intelligence.

Simon Marshall

October 5, 2017

4 Min Read

The application of AI to security has an exciting future, but is it really paying its way? Can anyone in security, hand on heart, say definitively that AI is keeping them ahead of bad actors and automated infections?

Despite AI-based security being relatively green, the answer is "yes," according to a new US and European study that compared detection and prevention rates for teams using AI-powered tools with those of teams using security software without AI.

About three quarters of those surveyed said AI-powered tools had enabled them to prevent more breaches. In about 80% of incidents, AI spotted threats faster than security teams working without it.

Beyond raising the detection and prevention bar significantly, the study suggests that teams are using AI as an adjunct to their own wisdom and skill, leaning on it to give the team a head start.

Conventional wisdom holds that very early adopters find it hard to adopt and profit from new technologies, but that doesn't seem to apply here, even though AI is not just a new technology but a new paradigm. So do security professionals feel they've crossed the chasm into that new paradigm?

Daniel Doimo, president and COO at Cylance, which published the survey results, said, "Executives that were first to make the leap of faith in AI have been the first to begin experiencing the rewards, particularly in the prevention of cyberattacks. Over the next year, I only expect to see this trend accelerate."

About 65% of security heads said they expected to reach ROI within two years. Among the have-nots, eight in ten are confident their boards and the C-suite are on the case when it comes to prioritizing AI adoption. The ROI comes primarily from AI being an automation technology, freeing teams to work on other projects. Enterprise machines also go down less frequently, saving troubleshooting resources and minimizing the cost of lost or stray data.

Cylance's AI model works by interrogating files before they are accessed and run, then either lowering a barrier to allow execution or lifting one to block it. In theory, this means a determination of whether a file is malicious arrives within milliseconds.
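That interrogate-then-gate flow can be sketched in a few lines. This is a toy illustration, not Cylance's actual model: `extract_features` and `score_file` are hypothetical stand-ins (a crude byte-diversity heuristic in place of a trained classifier), and the threshold is an assumed value.

```python
# Minimal sketch of a pre-execution file gate. score_file() is a
# stand-in for a trained ML model; it uses a crude byte-diversity
# heuristic purely for illustration.
import time


def extract_features(data: bytes) -> list:
    """Toy static features: file size and fraction of distinct byte values."""
    if not data:
        return [0.0, 0.0]
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    distinct = sum(1 for c in counts if c) / 256.0
    return [float(len(data)), distinct]


def score_file(data: bytes) -> float:
    """Stand-in for a model: returns a 0..1 'suspicion' score."""
    _, distinct = extract_features(data)
    return distinct  # a real model would map many features to a probability


def allow_execution(data: bytes, threshold: float = 0.9) -> bool:
    """Lower the barrier (allow) or lift it (block) before the file runs."""
    return score_file(data) < threshold


start = time.perf_counter()
verdict = allow_execution(b"MZ\x90\x00" + b"\x00" * 1024)  # benign-looking stub
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"allowed={verdict}, decided in {elapsed_ms:.3f} ms")
```

The point of the sketch is the shape of the pipeline, not the scoring: a static feature pass, a model score, and a binary gate, all cheap enough to complete before the file executes.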

I had a one-on-one with Cylance's chief data scientist to find out more about how new AI technology is advancing through 'generations' of capability. Today's highest level -- generation three -- is not simply flying from the nest into the wild; it continues to be beak-fed until it matures further.

Nurturing AI from first to second generation involves adding more features on top of the base engine, though more features should not be assumed to mean greater detection accuracy. Getting from second to third generation involves greater operational sophistication in addition to further machine learning improvements.

"Entry into third generation cannot be achieved only by increasing the number of AI features and [machine learning] samples," Homer Strong, chief data scientist at Cylance, told SecurityNow.

"[However,] a major practical obstacle to entering the third generation is to find efficient ways for malware analysts to provide feedback and oversight over the model." So, in a testing environment, AI is heavily reliant on development teams to tell it what to do and how to perform.


The objective of third-generation AI is to provide a prevention-first mode of operation at the perimeter. Yet more and more businesses are resigning themselves to the notion that an attack is inevitable, and are therefore looking to protect data already inside the perimeter, within network systems.

"Unfortunately, I would agree that we do see this sort of stance in the market frequently. And frankly, we, the security vendors, are partly to blame," said Steve Salinas, Cylance's senior product marketing manager.

Because AI is faster than standard technology, it is helping to reverse this resignation and restore the ideology of security at the perimeter. "Security vendors have done a really good job of convincing the market that compromise is inevitable. Cylance disagrees," he said.


— Simon Marshall, Technology Journalist, special to Security Now



