Intel Discloses Max Severity Bug in Its AI Model Compression Software

The improper input validation issue in Intel Neural Compressor enables remote attackers to execute arbitrary code on affected systems.

[Image: Intel AI poster at a trade show booth. Source: flowgraph via Shutterstock]

Intel has disclosed a maximum severity vulnerability in some versions of its Intel Neural Compressor software for AI model compression.

The bug, designated CVE-2024-22476, gives an unauthenticated remote attacker a way to execute arbitrary code on systems running affected versions of the software. It is the most serious among the dozens of flaws the company disclosed in a set of 41 security advisories this week.

Improper Input Validation

Intel identified CVE-2024-22476 as stemming from improper input validation, or a failure to properly sanitize user input. The chip maker has given the vulnerability a maximum score of 10 on the CVSS scale because the flaw is remotely exploitable with low complexity and has a high impact on data confidentiality, integrity, and availability. An attacker requires no special privileges, and no user interaction is needed for an exploit to work.
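
Intel's advisory does not publish exploit details, but the flaw class is well understood. The sketch below is a generic, hypothetical illustration of improper input validation in a Python service, not Intel's code: a handler that passes a user-supplied string to eval() hands the caller arbitrary code execution, while a validated version confines input to an expected shape.

```python
# Hypothetical illustration of the improper-input-validation flaw class
# (CWE-20). This is NOT Intel Neural Compressor code.
import ast

def vulnerable_handler(user_input: str):
    # Dangerous: eval() executes arbitrary Python supplied by the caller.
    # A payload like "__import__('os').system('id')" runs shell commands.
    return eval(user_input)

def safer_handler(user_input: str):
    # Safer: ast.literal_eval() accepts only Python literals
    # (numbers, strings, lists, dicts) and raises on anything else.
    try:
        return ast.literal_eval(user_input)
    except (ValueError, SyntaxError):
        raise ValueError("rejected: input is not a plain literal")
```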

The vulnerability affects Intel Neural Compressor versions before 2.5.0. Intel has recommended that organizations using the software upgrade to version 2.5.0 or later. Intel's advisory indicated that the company learned of the vulnerability from an external security researcher or entity that it did not identify.
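
For teams that want to confirm their exposure, here is a minimal sketch of a version check. It assumes the library is installed under its PyPI distribution name, neural-compressor, and that the packaging library is available.

```python
# Minimal sketch: flag installed Neural Compressor versions below the
# fixed 2.5.0 release. Assumes the PyPI distribution name "neural-compressor".
from importlib.metadata import PackageNotFoundError, version
from packaging.version import Version

try:
    installed = Version(version("neural-compressor"))
except PackageNotFoundError:
    print("neural-compressor is not installed")
else:
    if installed < Version("2.5.0"):
        print(f"Version {installed} is affected by CVE-2024-22476 -- upgrade to 2.5.0 or later")
    else:
        print(f"Version {installed} includes the fix")
```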

Intel Neural Compressor is an open source Python library that helps compress and optimize deep learning models for tasks such as computer vision, natural language processing, recommendation systems, and a variety of other use cases. Compression techniques include neural network pruning, or removing the least important parameters; quantization, which reduces memory requirements by storing weights at lower numeric precision; and distilling a larger model into a smaller one with similar performance. The goal of AI model compression technology is to enable the deployment of AI applications on diverse hardware, including devices with limited or constrained computational power, such as mobile devices.
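
As an illustration of what using the library looks like, here is a hedged sketch of post-training quantization based on the documented Neural Compressor 2.x API; the toy model and calibration data are placeholders, and exact signatures may vary across releases.

```python
# Hedged sketch of post-training INT8 quantization with Intel Neural
# Compressor 2.x; exact signatures may vary across releases.
import torch
from torch.utils.data import DataLoader, TensorDataset
from neural_compressor import PostTrainingQuantConfig, quantization

# A toy FP32 model and calibration data, standing in for a real workload.
float_model = torch.nn.Sequential(torch.nn.Linear(8, 4), torch.nn.ReLU())
calib_data = TensorDataset(torch.randn(32, 8), torch.zeros(32, dtype=torch.long))
calib_dataloader = DataLoader(calib_data, batch_size=8)

conf = PostTrainingQuantConfig()  # defaults to INT8 post-training quantization
q_model = quantization.fit(model=float_model, conf=conf,
                           calib_dataloader=calib_dataloader)
q_model.save("./quantized_model")  # smaller, lower-precision artifact
```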

One Among Many

CVE-2024-22476 is actually one of two vulnerabilities in Intel's Neural Compressor software that the company disclosed and patched this week. The other is CVE-2024-21792, a time-of-check-time-of-use (TOCTOU) flaw that could result in information disclosure. Intel assessed the flaw as presenting only a moderate risk because, among other things, an attacker would already need local, authenticated access to a vulnerable system to exploit it.
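
Intel's advisory does not describe the vulnerable code path, but TOCTOU bugs follow a common pattern: a program checks a resource and then uses it, and the resource changes in between. The sketch below is a generic, hypothetical Python illustration of the race, not Neural Compressor's code.

```python
# Hypothetical TOCTOU illustration (CWE-367). This is NOT Intel's code.
import os

def toctou_read(path: str) -> bytes:
    # Time of check: verify the caller may read the file...
    if not os.access(path, os.R_OK):
        raise PermissionError(path)
    # ...time of use: by now an attacker may have swapped the file
    # (e.g., replaced it with a symlink to a secret), so the check
    # no longer reflects what actually gets opened.
    with open(path, "rb") as f:
        return f.read()

def safer_read(path: str) -> bytes:
    # Safer: open directly and let the OS enforce permissions atomically,
    # handling the failure instead of pre-checking.
    with open(path, "rb") as f:
        return f.read()
```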

In addition to the Neural Compressor flaws, Intel disclosed five high-severity privilege escalation vulnerabilities in its UEFI firmware for server products. Intel's advisory listed all five (CVE-2024-22382, CVE-2024-23487, CVE-2024-24981, CVE-2024-23980, and CVE-2024-22095) as improper input validation flaws, with severity scores ranging from 7.2 to 7.5 on the CVSS scale.

Emerging AI Vulnerabilities

The Neural Compressor vulnerabilities are examples of what security analysts have recently described as the expanding, but often overlooked, attack surface that AI software and tools are creating at enterprise organizations. Much of the security concern around AI software so far has centered on the risks of using large language models and LLM-enabled chatbots like ChatGPT. Over the past year, researchers have released numerous reports on the susceptibility of these tools to model manipulation, jailbreaking, and several other threats.

What has received somewhat less focus so far is the risk to organizations from vulnerabilities in the core software components and infrastructure used to build and support AI products and platforms. Researchers from Wiz, for instance, recently found weaknesses in the widely used HuggingFace platform that gave attackers a way to tamper with models in the registry or to upload weaponized ones with relative ease. A recent study commissioned by the UK's Department for Science, Innovation and Technology identified numerous potential cyber-risks to AI technology at every life cycle stage, from the software design phase through development, deployment, and maintenance. The risks range from a failure to do adequate threat modeling and a failure to account for secure authentication and authorization in the design phase, to code vulnerabilities, insecure data handling, inadequate input validation, and a long list of other issues.

About the Author

Jai Vijayan, Contributing Writer

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year career at Computerworld, Jai also covered a variety of other technology topics, including big data, Hadoop, Internet of Things, e-voting, and data analytics. Prior to Computerworld, Jai covered technology issues for The Economic Times in Bangalore, India. Jai has a Master's degree in Statistics and lives in Naperville, Ill.

