
IriusRisk Brings Threat Modeling to Machine Learning Systems

The newly launched AI & ML Security Library allows developers to analyze the code used in machine learning systems to identify and address risks.


As part of the "shift left" movement to incorporate security discussions earlier in the software development life cycle, organizations are beginning to look at threat modeling to identify security flaws in software design. With developers increasingly incorporating machine learning into their applications, threat modeling is becoming necessary for identifying the risks those systems bring to the organization.

"People are still grappling with the whole idea that when you use that very new technology [machine learning], it brings along a bunch of risk, as well," says Gary McGraw, co-founder of the Berryville Institute of Machine Learning. "I've been in the unenviable position of saying, 'Well, there's this risk, and there's that risk, and the sky is falling,' and everybody goes, 'Well, what am I supposed to do about that?'"

There have been many conversations about machine learning risks, but the difficulty lies in figuring out how to address them, McGraw says. Threat modeling, which means identifying the types of threats that can cause harm to the organization, helps organizations think through security risks in machine learning systems, such as data poisoning, input manipulation, and data extraction. If developers can uncover the security flaws in their designs through threat modeling, they can reduce the time spent on security testing during development and before production. NIST's Guidelines on Minimum Standards for Developer Verification of Software recommends threat modeling to look for design-level security issues.
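For a sense of what such an exercise yields, here is a minimal illustrative sketch, in Python, of the kind of structured record a threat-modeling pass over a machine learning system might produce. The Threat class, component names, and mitigations are assumptions invented for this example; they are not IriusRisk's data model or the BIML taxonomy.

```python
# Illustrative only: a hypothetical threat-model entry for an ML pipeline,
# loosely organized around the threat categories named above. This is NOT
# IriusRisk's data model or the BIML taxonomy; it is just a sketch of the
# kind of structured output threat modeling produces.
from dataclasses import dataclass, field

@dataclass
class Threat:
    name: str                 # e.g., "data poisoning"
    component: str            # which part of the ML system is exposed
    impact: str               # what an attacker gains
    mitigations: list[str] = field(default_factory=list)

ML_THREATS = [
    Threat(
        name="data poisoning",
        component="training pipeline",
        impact="attacker-controlled samples skew model behavior",
        mitigations=["verify data provenance", "audit label sources"],
    ),
    Threat(
        name="input manipulation",
        component="inference API",
        impact="crafted inputs trigger misclassification",
        mitigations=["validate inputs", "test adversarial robustness"],
    ),
    Threat(
        name="data extraction",
        component="trained model",
        impact="confidential training data recovered via queries",
        mitigations=["rate-limit queries", "keep raw secrets out of training sets"],
    ),
]

# Print a one-line summary per threat, mapping each risk to its mitigations.
for t in ML_THREATS:
    print(f"{t.component}: {t.name} -> {', '.join(t.mitigations)}")
```

The value is less in the code than in the habit it represents: each design-level risk gets named, tied to a component, and paired with concrete mitigations before any security testing begins.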

IriusRisk's threat modeling tool addresses this challenge by automating both threat modeling and architecture risk analysis. Developers and security teams can import their code into the tool to generate diagrams and threat models, and built-in templates make threat modeling accessible even to those unfamiliar with diagramming tools or risk analysis.

The newly launched AI & ML Security Library extends this approach to machine learning: organizations using IriusRisk can threat model the machine learning systems they are planning, understand the security risks involved, and learn how to mitigate them.

“We're finally getting around to building machinery that people can use to address the risk and control the risk," says McGraw, who is also a member of IriusRisk’s advisory board. "When you put machine learning into your [system] design, and you're using IriusRisk, now you know what risks are involved and what to do about that."

What ML Threat Modeling Looks Like

IriusRisk's AI & ML Security Library helps organizations ask necessary questions. For example:

  1. Ask where the data being used to train the machine learning model came from. It's important to also ask whether anyone had the opportunity to embed incorrect or malicious data to make the machine do the wrong thing. (A minimal sketch of such a provenance check appears after this list.)

  2. Consider how the machine keeps learning once it is in production. Machine learning systems that remain online and keep learning from users are more dangerous than those that do not. "It depends on who is using it. Is it your people? Is it bad people? Is it everybody on Twitter, or X?" McGraw says, noting that past projects have had to be taken offline after they learned objectionable content.

  3. Ask whether confidential information can be extracted from the machine. Confidential data fed into a machine learning algorithm is not protected by cryptographic means and can be extracted. "If you put the data in the machine, it's in the machine," McGraw says. "You need to think about making sure that people using your machine learning system cannot extract that confidential data."
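The first of those questions, data provenance, is the most straightforward to act on. Below is a minimal sketch, assuming a hypothetical manifest.json of SHA-256 digests covering every training file; the file layout and manifest format are illustrative inventions, not part of IriusRisk or BIML, and a real pipeline would also verify the manifest's own signature.

```python
# Illustrative sketch: refuse to train on data that is missing from, or does
# not match, a previously recorded manifest of SHA-256 digests. The manifest
# format and paths here are hypothetical, invented for this example.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large datasets don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_training_data(data_dir: Path, manifest_path: Path) -> list[str]:
    """Return a list of problems; an empty list means every file checks out."""
    manifest = json.loads(manifest_path.read_text())  # {"train.csv": "<hex digest>", ...}
    problems = []
    for name, expected in manifest.items():
        candidate = data_dir / name
        if not candidate.exists():
            problems.append(f"missing: {name}")
        elif sha256_of(candidate) != expected:
            problems.append(f"altered since manifest was recorded: {name}")
    return problems

if __name__ == "__main__":
    issues = verify_training_data(Path("training_data"), Path("manifest.json"))
    if issues:
        raise SystemExit("Refusing to train:\n" + "\n".join(issues))
    print("Training data matches manifest; proceeding to training.")
```

A check like this cannot stop poisoning at the original source, but it pins down exactly which data a model was trained on, which is the kind of visibility the threat-modeling questions above are after.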

The AI & ML Security Library is based on the BIML ML Security Risk Framework, a taxonomy of machine learning threats paired with an architectural risk assessment of typical machine learning components, developed by McGraw. The framework is designed for the developers, engineers, and designers creating applications and services that use machine learning, and is meant to be applied in the early design and development phases of a project. With IriusRisk's library, anyone using machine learning can apply BIML's framework.

The AI & ML Security Library is available to IriusRisk customers and those using the community edition of the platform.

Time to Be Threat Modeling

The AI & ML Security Library was developed in response to interest from organizations about how to analyze and secure AI and ML systems, according to Stephen de Vries, CEO of IriusRisk.

"We have seen a surge in interest from our customers in the finance and technology sectors for guidance on how to analyze, and secure design ML systems," de Vries said in a statement. "Since these are often new projects that are still in the design phase, performing threat modeling here adds a lot of value, because those teams will very quickly understand where the security goalposts are – and what they need to do in order to get there."

The library doesn't help organizations that don't have visibility into their machine learning use. Just as organizations can have shadow IT – where different business stakeholders set up their own servers and Web applications without IT oversight – they can also have shadow machine learning, McGraw says. Different departments are trying out new applications and tools, but there is a gap between what individual employees are using and what risks IT and security teams know about.

“Everybody's like, 'I don't think I have any machine learning in my organization,'" McGraw says. "But as soon as they find out that they do … they find it everywhere.”

Many organizations do not incorporate threat modeling during software design, and those that do rely on manual processes in which a person analyzes the threats one at a time.

"If you have a mature threat modeling program and you're using a tool like IriusRisk, you can also now handle machine learning. So the people who are already doing the best are going to do even better," McGraw says. "What about the people who aren't doing threat modeling? Maybe they should start. It's not new. It's time to do it."

About the Author(s)

Fahmida Y. Rashid, Managing Editor, Features, Dark Reading

As Dark Reading's managing editor for features, Fahmida Y. Rashid focuses on stories that provide security professionals with the information they need to do their jobs. She has spent over a decade analyzing news events and demystifying security technology for IT professionals and business managers. Prior to specializing in information security, Fahmida wrote about enterprise IT, especially networking, open source, and core internet infrastructure. Before becoming a journalist, she spent over 10 years as an IT professional, with experience as a network administrator, software developer, management consultant, and product manager. Her work has appeared in various business and technology trade publications, including VentureBeat, CSO Online, InfoWorld, eWEEK, CRN, PC Magazine, and Tom's Guide.
