
Startup Aims to Secure AI, Machine Learning Development

With security experts warning against attacks on machine learning models and data, startup HiddenLayer aims to protect the neural networks powering AI-augmented products.


As companies increasingly add artificial intelligence (AI) capabilities to their product portfolios, cybersecurity experts warn that the machine learning (ML) components are vulnerable to new types of attacks and need to be protected.

Startup HiddenLayer, which launched on July 19, aims to help companies better protect their sensitive machine learning models and the data used to train them. The company has released its first products in the ML detection and response segment, which harden models against attack and safeguard their training data.

The risks are not theoretical: HiddenLayer's founders worked at Cylance when researchers found ways to bypass that company's AI engine for detecting malware, says Christopher Sestito, CEO of HiddenLayer.

"They attacked the model through the product itself and interacted with the model enough to ... determine where the model was weakest," he says.

Sestito expects attacks against AI/ML systems to grow as more companies incorporate the features into their products.

"AI and ML are the fastest growing technologies we have ever seen, so we expect them to be the fastest growing attack vectors that we have ever seen as well," he says.

Flaws in the Machine Learning Model

ML has become a must-have for many companies' next generation of products, but businesses typically add AI-based features without considering the security implications. Among the threats are model evasion, such as the research conducted against Cylance, and functional extraction, in which attackers query a model and construct a functionally equivalent system from its outputs.
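To make the second threat concrete, the sketch below shows a bare-bones functional extraction attack in Python. Everything in it, the victim model, the feature space, and the query budget, is invented for illustration; real attacks target far more complex systems.

```python
# Hypothetical sketch of "functional extraction": an attacker with only
# query access to a victim model trains a surrogate that mimics it.
# The victim model and attack parameters here are invented for
# illustration; no real product is involved.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in "victim" model the attacker cannot inspect, only query.
X_train = rng.normal(size=(1000, 8))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
victim = LogisticRegression().fit(X_train, y_train)

# Attacker step 1: send chosen queries and record the victim's outputs.
queries = rng.normal(size=(5000, 8))   # attacker-chosen inputs
labels = victim.predict(queries)       # victim's responses

# Attacker step 2: train a surrogate on the stolen input/output pairs.
surrogate = DecisionTreeClassifier(max_depth=5).fit(queries, labels)

# The surrogate now approximates the victim without any access to its
# parameters or original training data.
test = rng.normal(size=(1000, 8))
agreement = (surrogate.predict(test) == victim.predict(test)).mean()
print(f"surrogate agrees with victim on {agreement:.1%} of inputs")
```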

Two years ago, Microsoft, MITRE, and other companies created the Adversarial Machine Learning Threat Matrix to catalog the potential threats against AI-based systems. Now rebranded as the Adversarial Threat Landscape for Artificial Intelligence Systems (ATLAS), the dictionary of possible attacks highlights that innovative technologies will attract innovative attacks.

"Unlike traditional cybersecurity vulnerabilities that are tied to specific software and hardware systems, adversarial ML vulnerabilities are enabled by inherent limitations underlying ML algorithms," according to the ATLAS project page on GitHub. "Data can be weaponized in new ways which requires an extension of how we model cyber adversary behavior, to reflect emerging threat vectors and the rapidly evolving adversarial machine learning attack lifecycle."

The practical threat is well known to the three founders of HiddenLayer — Sestito, Tanner Burns, and James Ballard — who worked together at Cylance. Back then, researchers at Skylight Cyber appended known-good code, actually a list of strings from the executable of the game Rocket League, to fool Cylance's technology into classifying 84% of malware samples as benign.
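The underlying trick can be shown with a toy example. The scorer below naively averages per-string verdicts, so padding a malicious sample with strings lifted from a benign file dilutes its score below the detection threshold. The string lists and scoring logic are made up and bear no relation to Cylance's actual engine.

```python
# Toy illustration of the append-benign-content evasion described above.
# The scorer and string lists are invented for demonstration only.

BAD_STRINGS = {"CreateRemoteThread", "VirtualAllocEx", "keylog"}

def malware_score(strings):
    """Fraction of extracted strings that look malicious."""
    flagged = sum(1 for s in strings if s in BAD_STRINGS)
    return flagged / len(strings)

sample = ["CreateRemoteThread", "VirtualAllocEx", "keylog", "main"]
print(malware_score(sample))          # 0.75 -> flagged as malware

# Attacker appends many strings taken from a known-good executable.
benign_padding = [f"GoodGameString{i}" for i in range(100)]
evasive_sample = sample + benign_padding
print(malware_score(evasive_sample))  # ~0.03 -> slips past the threshold
```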

"We led the relief effort after our machine learning model was attacked directly through our product and realized this would be an enormous problem for any organization deploying ML models in their products," Sestito said in a statement announcing HiddenLayer's launch.

Looking for Adversaries in Real Time

HiddenLayer aims to create a system that can monitor the operation of ML systems and, without needing access to the data or calculations, determine whether the software is being attacked using one of the known adversarial methods.

"We are looking at the behavioral interactions with the models — it could be an IP address or endpoint," Sestito says. "We are analyzing whether the model is being used as it is intended to be used or if the inputs and outputs are being leveraged or is the requester making very high entropy decisions."

The ability to do behavioral analysis in real time sets the company's ML detection and response apart from other approaches, he says. In addition, the technology does not require access to the specific model or the training data, further insulating intellectual property, HiddenLayer says.

The approach also means that the overhead from the security agent is small, on the order of 1 or 2 milliseconds, says Sestito.

"We are looking at the inputs after the raw data has been vectorized, so there is very little performance hit," he says.

About the Author

Robert Lemos, Contributing Writer

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline Journalism (Online) in 2003 for coverage of the Blaster worm. Crunches numbers on various trends using Python and R. Recent reports include analyses of the shortage in cybersecurity workers and annual vulnerability trends.

