How Enterprises Can Get Used to Deploying AI for Security

It's important to take a "trust journey" to see how AI technology can benefit an organization's cybersecurity.

[Illustration: a man shaking hands with a robot, symbolizing how humans need to learn to trust AI. Source: Zapp2Photo via Shutterstock]

It's one thing to tell organizations that artificial intelligence (AI) can spot patterns and shut down attacks faster and more effectively than human security analysts can. It's another thing entirely to get both business leaders and security teams comfortable with handing more control and more visibility over to AI technology. One way to accomplish that is to let people try it out in a controlled environment and see what's possible, says Max Heinemeyer, director of threat hunting at Darktrace.

This isn't a process that can be rushed, Heinemeyer says. Building up trust takes time. He calls this process a "trust journey" because it's an opportunity for the organization — both security teams and business leaders — to see for themselves how AI technology would act in their organizations.

One thing they will discover, Heinemeyer notes, is that AI security is no longer an immature business. Rather, it is a mature one, with many use cases and a body of real-world experience that organizations can draw on during this familiarization period.

Beginning the Trust Journey
The trust journey relies on being able to adjust the deployment to match the organization's comfort level with autonomous activity, Heinemeyer notes. How much control an organization is willing to cede to the AI also depends heavily on its security maturity. Some organizations carve out focused areas, such as letting the AI operate fully autonomously on desktops or specific network segments. Others turn off all response actions and keep a human analyst in the loop to handle alerts manually, or let the analyst observe how the AI handles threats, stepping in as needed.

Then there are the more hesitant, who deploy only to core servers, users, or applications rather than the entire environment. Meanwhile, some are willing to deploy the technology throughout the network, but only during the hours when human analyst teams are not available.

"And there are organizations who completely get it [and] want to automate as much as possible," Heinemeyer says. "They really jump in with both feet."

All of these are valid approaches because AI isn't supposed to be one-size-fits-all, Heinemeyer says. The entire point of the technology is that it adapts to the organization's needs and requirements rather than forcing the organization into anything it isn't ready for.

"If you want to make AI tangible for organizations and show value, you need to be able to adjust to the environment," Heinemeyer says.

Getting Sign-Off on AI
While the hands-on approach is important for getting used to the technology and understanding its capabilities, it also gives security teams the opportunity to decide which metrics to use to measure the value of having AI take over detection and response. For example, they could compare the AI analyst against human analysts on speed of detection, precision and accuracy, and time to response. Or perhaps the organization cares more about the amount of analyst time saved or the resources freed up for other work.
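Given a log of incidents annotated with who handled them, those comparisons reduce to a few aggregate statistics. The records and numbers below are invented purely for illustration:

```python
from statistics import mean

# Hypothetical incident records for comparing an AI analyst against a
# human team: (handled_by, minutes_to_detect, minutes_to_respond,
# was_true_positive). All figures are illustrative, not real data.
incidents = [
    ("ai",    2,   5, True),
    ("ai",    1,   4, True),
    ("ai",    3,   6, False),
    ("human", 45, 90, True),
    ("human", 30, 60, True),
]

def summarize(records, who):
    rows = [r for r in records if r[0] == who]
    return {
        "mean_time_to_detect_min": mean(r[1] for r in rows),
        "mean_time_to_respond_min": mean(r[2] for r in rows),
        # Precision: share of alerts acted on that were real threats.
        "precision": sum(r[3] for r in rows) / len(rows),
    }

for who in ("ai", "human"):
    print(who, summarize(incidents, who))
```

Whichever metrics the team picks during the trial period become the baseline it uses to justify, or rein in, further autonomy later.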

It's often easier to have this discussion with people who are not in the security trenches, because they can focus on the impact and the benefits, says Heinemeyer. "C-level executives, such as the CMO, CFO, CIO, and CEO — they are very used to understanding that automation means business benefits," he says.

C-suite executives see that faster detection means less business disruption. They can calculate the cost of hiring more security analysts and building out a 24/7 security operations center. Even when the AI technology is used only to detect and contain threats, the security team's response changes because the AI kept the attack from causing damage. The more of this work that is automated, the fewer incidents escalate into real harm.
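That staffing calculation is simple back-of-the-envelope arithmetic. A minimal sketch, with every figure assumed for illustration:

```python
# Rough cost of staffing a 24/7 security operations center.
# Every number below is an assumption, not industry data.

hours_per_year = 24 * 365          # 8,760 hours of coverage needed
analyst_hours_per_year = 2_080     # one full-time analyst (40 h x 52 wk)
min_analysts_per_shift = 2         # assume no one monitors alone
fully_loaded_cost = 150_000        # assumed salary + benefits, USD

# Head count needed to keep two seats filled around the clock,
# ignoring vacation, sick leave, and training for simplicity.
headcount = (hours_per_year * min_analysts_per_shift) / analyst_hours_per_year
print(f"Analysts needed: {headcount:.1f}")                    # ~8.4
print(f"Annual cost: ${headcount * fully_loaded_cost:,.0f}")  # ~$1.26M
```

Numbers like these are why off-hours or full automation tends to be an easy sell at the executive level, whatever the security team's comfort level.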

When it comes to AI, "there's a lot of theorizing happening," Heinemeyer says. "At some point, people have to make a leap for the hands-on [experience] instead of just thinking theory and thought experiments."

About the Author

Fahmida Y. Rashid, Managing Editor, Features, Dark Reading

As Dark Reading's managing editor for features, Fahmida Y. Rashid focuses on stories that provide security professionals with the information they need to do their jobs. She has spent over a decade analyzing news events and demystifying security technology for IT professionals and business managers. Prior to specializing in information security, Fahmida wrote about enterprise IT, especially networking, open source, and core internet infrastructure. Before becoming a journalist, she spent over 10 years as an IT professional, with experience as a network administrator, software developer, management consultant, and product manager. Her work has appeared in various business and technology trade publications, including VentureBeat, CSO Online, InfoWorld, eWEEK, CRN, PC Magazine, and Tom's Guide.
