10 Types of AI Attacks CISOs Should Track
Risk from artificial intelligence vectors presents a growing concern among security professionals in 2023.
May 18, 2023
![Man crossing a chasm on a hanging bridge, surrounded by clouds Man crossing a chasm on a hanging bridge, surrounded by clouds](https://eu-images.contentstack.com/v3/assets/blt6d90778a997de1cd/blta2ab661466ff576c/64f1750a0b67f36fb0706c42/chasmss-Dmitry_Burlakov-alamy.jpg?width=700&auto=webp&quality=80&disable=upscale)
Source: Dmitry Burlakov via Alamy Stock Photo
As CISOs work to future proof their cybersecurity strategy and infrastructure for tomorrow's emerging threats, artificial intelligence (AI) attacks are looming large in their thoughts. Even without the hype that's billowed around ChatGPT and generative AI's skyrocketing popularity, AI risk has started to unfold as a growing concern among security researchers and pundits in 2023.
Security advocates are warning CISOs that they're fighting a two-front war when it comes to AI risk and resilience. Not only do they need to be wary of the threat posed by adversarial AI attacks against enterprise deployments of AI and machine learning (ML) models, but they must also defend themselves from a greater volume of attacks fueled by the bad guys' use of AI in their offensive campaigns.
One of the attack types that should concern CISOs from a data and process integrity standpoint are poisoning attacks. The principle behind these attacks is that by manipulating the data which a deep learning model trains upon, an attacker can either corrupt the model (untargeted) or even manipulate its output to produce favorable results for the attacker (targeted). Model poisoning is just the tip of the iceberg in the corpus of data integrity issues that threaten AI integrity. Large language models (LLM) are under the microscope of risk and resiliency researchers as they explore how issues like feedback loops and AI bias can make AI output unreliable.
Being still very deeply rooted in academia, the field of data science and AI/ML research rests upon a high degree of collaborative and iterative development that relies on a lot of sharing — be it data sharing or sharing of models. This can introduce significant risk to the AI supply chain, much in the same way that appsec folks are dealing with software supply chain security issues.
Recent research shows a proof-of-concept of how attackers could potentially embed malicious code into pretrained machine learning models to carry out a ransomware attack against an organization utilizing ML models from public repositories. Attackers could do this by hijacking a legitimate model on a repo, maliciously crafting it and then reloading that model.
Some of the biggest risks around AI are actually data security and data privacy threats. If AI models aren't built with sufficient privacy measures, it is possible for an attacker to compromise the confidentiality of the data used to train those models. Some attacks, like membership inference, can query models in such a way as to figure out whether a particular piece of data is used in a model — which could be very problematic in healthcare situations where it'd be possible to infer someone has a specific disease if the inference attack confirmed they're in an AI model that studies that analyzes some facet of that disease.
Meantime, training data extraction attacks like model inversion can actually reconstruct training data. This is a challenge because "an ML system that is trained up on confidential or sensitive data will have some aspects of those data built right into it through training," explains Gary McGraw, co-founder of Berryville Institute of Machine Learning.
It's not just data that can be stolen from AI/ML deployments. Attackers can also potentially steal the special sauce of how a particular AI/ML model works through various types of model theft attacks. Attackers are most likely to look for straightforward measure such as breaking into private source code repositories through phishing or password attacks to steal models outright.
But researchers have also explored how attackers could potentially employ model extraction attacks against models that they couldn't access quite so simply. These attacks are able to reconstruct how a model predicts something by systematically querying that model. This could be of concern for CISOs at organizations that make significant in-house investments to proprietary AI models that are tied tightly into a core product.
A panel of experts took the stage at the 2023 RSA Conference to discuss the coming AI risk and resilience issues that CISOs are going to tackle in the coming couple of years. The discussion was broad and wide-ranging, but one of the specific emerging attack types that popped up in the discussion was what is called a sponge attack. In this attack type, adversaries can essentially conduct a denial of service attack against an AI model by specially crafting input to burn up the model's use of hardware consumption.
"It's essentially where you're taking the parameters of the model or the layers within the neural network and you're essentially trying to cause the neural network to use more compute to the point of potentially exceeding the compute available to the system and taking the system down," explained Neil Serebryany, CEO of CalypsoAI.
The old developer's maxim is to never trust user input. Doing so begets attacks like SQL injection and cross-site scripting. Injection attacks against regular applications are already prevalent enough — it still occupies the number three slot on the latest OWASP Top 10. Now with generative AI entering the mix, CISOs are going to also have to worry about prompt injection.
Prompt injection is the use of maliciously crafted prompts into generative AI to elicit incorrect, inaccurate, and even potentially offensive responses. This could be particularly mettlesome as developers fold ChatGPT and other Large Language Models (LLMs) into their applications so that a user's prompt is crunched by the AI and triggers some other action, such as posting content to a website or crafting automated emails that could potentially include incorrect or incendiary messages.
Evasion attacks are some of the most well-known adversarial AI attacks out there because some of them can be so deviously simple that they're fun to talk about. These attacks are the ones that can evade detection or classification systems — things like facial recognition or autonomous vehicle vision systems — through some visual trickery. For example, the use of maliciously crafted stickers on a stop sign could get a self-driving car to fail to read it properly.
More recently, an attack highlighted at the Machine Learning Evasion Competition (MLSEC 2022) got an AI facial recognition system to slightly modify celebrity photos to have them be recognized as completely different people.
Most of the attacks detailed here so far have been attacks lobbed against enterprise use of AI. The bad guys will not only be probing the flaws in AI in their future endeavors, but they'll also likely be utilizing AI to bolster their attacks against all types of enterprise applications and systems.
One of the gimmes for the attackers is already ramping up, namely the use of generative AI like ChatGPT to automate the creation of phishing emails. Security researchers are already reporting a ramp-up in phishing volume and effectiveness since the ready availability of ChatGPT online. This is a huge concern and made it to the SANS Top 5 Most Dangerous Cyberattacks for 2023 list.
ChatGPT has broken the deepfake attack threat completely out of the realm of the theoretical and into the world of practical attacks. CISOs should be working on awareness efforts to help their workers understand that AI-generated media like voice and video are getting easier to produce than ever, making it very easy to impersonate a CEO or other executive in order to convince workers to fall for business email compromise and other scams that involve the transfer of large sums of money. This is only going to serve to amplify the already growing threat of BECs.
Security researchers expect attackers to increasingly lean on generative AI to help them craft malware and quickly discover vulnerabilities in targeted systems to speed up and scale their attacks even further than they've already been doing with other automated technology. This is another one on the SANS Top 5 Most Dangerous Cyberattacks for 2023 list.
At RSAC 2023, Steven Sims, offensive operations curriculum lead for SANS and a longtime vulnerability researcher and exploit developer, demonstrated how easy it will be for even the most nontechnical criminal to get ChatGPT to generate ransomware code and to discover a zero-day flaw in a specific piece of code.
Security researchers expect attackers to increasingly lean on generative AI to help them craft malware and quickly discover vulnerabilities in targeted systems to speed up and scale their attacks even further than they've already been doing with other automated technology. This is another one on the SANS Top 5 Most Dangerous Cyberattacks for 2023 list.
At RSAC 2023, Steven Sims, offensive operations curriculum lead for SANS and a longtime vulnerability researcher and exploit developer, demonstrated how easy it will be for even the most nontechnical criminal to get ChatGPT to generate ransomware code and to discover a zero-day flaw in a specific piece of code.
As CISOs work to future proof their cybersecurity strategy and infrastructure for tomorrow's emerging threats, artificial intelligence (AI) attacks are looming large in their thoughts. Even without the hype that's billowed around ChatGPT and generative AI's skyrocketing popularity, AI risk has started to unfold as a growing concern among security researchers and pundits in 2023.
Security advocates are warning CISOs that they're fighting a two-front war when it comes to AI risk and resilience. Not only do they need to be wary of the threat posed by adversarial AI attacks against enterprise deployments of AI and machine learning (ML) models, but they must also defend themselves from a greater volume of attacks fueled by the bad guys' use of AI in their offensive campaigns.
About the Author(s)
You May Also Like
CISO Perspectives: How to make AI an Accelerator, Not a Blocker
August 20, 2024Securing Your Cloud Assets
August 27, 2024