The Emerging AI Security Threat: 4 Ways To Prepare

Artificial intelligence represents a huge opportunity for cybercriminals to wreak havoc and extort organizations as AI becomes more pervasive.


When people talk about artificial intelligence (AI) and security, the conversation almost always revolves around how AI and machine learning can be applied to fighting malware and other malicious cyberattacks that threaten enterprises.

Take, for example, a recent survey in which 700 IT executives expressed nearly unanimous enthusiasm about AI's potential to transform daily operations, products, and services at their companies. In fact, they cited detecting and blocking malware, along with predictive insights for network troubleshooting, as the AI use cases most beneficial to their organizations.

That's great, as AI indeed holds enormous promise as a way to bolster cybersecurity posture. But there's another side to the AI security discussion that's only starting to get the attention it deserves: securing AI systems themselves.

Unfortunately, AI also presents a huge opportunity for cybercriminals to wreak havoc and extort organizations for ransom as the technology becomes more pervasive throughout companies and society in general. For that reason, many experts expect breaches of AI data and models to rise in the coming years.

As a Brookings Institution report put it: "Increasing dependence on AI for critical functions and services will not only create greater incentives for attackers to target those algorithms, but also the potential for each successful attack to have more severe consequences."

Furthermore, in the case of AI-based security solutions specifically, the report said, "If we rely on machine learning algorithms to detect and respond to cyberattacks, it is all the more important that those algorithms be protected from interference, compromise, or misuse."

Because most organizations are still relatively early on the AI adoption curve, however, they're only just starting to wrap their arms around the special security considerations of AI development and deployment.

AI attacks differ from traditional application or network breaches. Traditional breaches typically involve stealing and/or encrypting information, perhaps via ransomware, or seizing control of a network through all-too-familiar means such as denial-of-service attacks or DNS tunneling. AI threats, on the other hand, center on corrupting the large amounts of data used to train AI models.

Thus, to keep AI systems secure, organizations must understand and defend against the distinctive infiltration tactics adversaries may use, such as:

  • Poisoning attacks, in which hackers gain access during the AI model training phase (often via malware) and tamper with the learning process by injecting inaccurate or mislabeled data, degrading the trained model's accuracy.

  • Model stealing, in which bad actors gain access to source code repositories through phishing or weak passwords, hunt for model files, and purloin the model parameters.

  • Data extraction attacks, in which intruders craft queries against a deployed model to reconstruct information about its training data.
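To make the first of these tactics concrete, the toy sketch below trains a deliberately simple nearest-centroid "model" on clean data and on data poisoned with injected, mislabeled outliers, then compares test accuracy. Everything here, from the synthetic two-class data to the injected points, is invented purely for illustration; real poisoning attacks target far larger models and datasets.

```python
import random

random.seed(0)

def make_data(n):
    # Toy two-class data: class 0 clusters around 0.0, class 1 around 4.0.
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        data.append((random.gauss(4.0 * label, 1.0), label))
    return data

def train_centroids(data):
    # A stand-in "model": the mean feature value per class.
    sums = {0: 0.0, 1: 0.0}
    counts = {0: 0, 1: 0}
    for x, y in data:
        sums[y] += x
        counts[y] += 1
    return {y: sums[y] / counts[y] for y in (0, 1)}

def accuracy(model, data):
    predict = lambda x: min(model, key=lambda c: abs(x - model[c]))
    return sum(predict(x) == y for x, y in data) / len(data)

train, test = make_data(1000), make_data(1000)

# Poisoning: the attacker injects mislabeled outliers (x = 10.0 tagged as
# class 0) into the training set, dragging the class-0 centroid upward
# and shifting the decision boundary toward class 1.
poisoned = train + [(10.0, 0)] * 200

clean_acc = accuracy(train_centroids(train), test)
poisoned_acc = accuracy(train_centroids(poisoned), test)
print(f"clean: {clean_acc:.2f}  poisoned: {poisoned_acc:.2f}")
```

Even this crude injection measurably degrades accuracy on honest test data, which is exactly why the training pipeline itself needs access controls and data validation.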

Given these risks, it's essential that organizations don't delay rethinking their security ecosystems to safeguard AI data and models. Here are four steps to take right now.

Take Inventory
A company can't protect its AI models, algorithms, and systems unless it has a firm grasp on where they all are. Therefore, every organization should diligently develop and maintain a formal catalog of all its AI uses.

It's not easy work. "One bank made an inventory of all their models that use advanced or AI-powered algorithms and found a staggering total of 20,000," the Brookings report noted. But the effort is well worth it.

Organizations should treat AI models not as some IT outlier but as a hard asset to be tracked just as rigorously as a laptop or phone issued to an employee. Every company with customized AI models proprietary to its business needs to know their whereabouts, chronicle their usage, and understand who has access. This level of discipline is the only way to maintain the rigor needed to properly protect AI systems.
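To make that tracking concrete, a catalog entry might record fields like the ones below. This is only an illustrative sketch, not a prescribed schema; every field name and value here is hypothetical and should be adapted to your own governance requirements.

```python
from dataclasses import dataclass, field

@dataclass
class ModelInventoryEntry:
    # All fields are illustrative; adapt to your governance needs.
    name: str
    owner: str
    version: str
    training_data_sources: list = field(default_factory=list)
    deployment_locations: list = field(default_factory=list)
    authorized_users: list = field(default_factory=list)

entry = ModelInventoryEntry(
    name="fraud-scoring",
    owner="risk-analytics",
    version="2.3.1",
    training_data_sources=["transactions_2023"],
    deployment_locations=["prod-us-east"],
    authorized_users=["alice", "bob"],
)
print(entry.name, entry.version)
```

The point is less the exact fields than that each model has an owner, a known location, and an explicit access list, mirroring how laptops and phones are tracked.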

Secure the Data
AI is all about data, so companies should double down on their efforts to execute policies, procedures, and best practices for securing all enterprise data – including the entire AI ecosystem, from development to deployment.
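One concrete practice under such policies is integrity-checking training data so that tampering, as in a poisoning attack, is caught before the data ever reaches a training run. Below is a minimal sketch using a SHA-256 fingerprint; the dataset bytes are invented for illustration.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # SHA-256 digest of the raw training data bytes.
    return hashlib.sha256(data).hexdigest()

# Record the fingerprint when the dataset is approved for training.
training_data = b"label,feature\n1,0.5\n0,1.2\n"
recorded = fingerprint(training_data)

# Later, before a training run: verify the data is unchanged.
tampered = training_data + b"1,9.9\n"
print(fingerprint(training_data) == recorded)  # unchanged data passes
print(fingerprint(tampered) == recorded)       # tampered data fails
```

In practice the recorded digests would live in the model inventory alongside the training-data sources, so a mismatch blocks the run and triggers an investigation.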

Stay Current
As industry awareness of the AI security challenge grows, technologies and initiatives to help are bound to keep emerging. One example is Private Aggregation of Teacher Ensembles (PATE), an approach that defends AI systems from model-duplicating techniques that can leak sensitive data. Companies should make it a priority to stay abreast of such developments.
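PATE's core mechanism can be sketched loosely: many "teacher" models trained on disjoint data each vote on a query, and only a noisy tally of the votes is released, so no single teacher's training data can be pinned down from the answers. Below is a minimal illustration of that noisy-vote aggregation; the teacher predictions are hard-coded hypotheticals, and the Laplace noise is built as the difference of two exponential draws.

```python
import random

random.seed(1)

def noisy_aggregate(teacher_votes, epsilon=1.0):
    # Tally each teacher's predicted label for one query.
    counts = {}
    for label in teacher_votes:
        counts[label] = counts.get(label, 0) + 1
    # Add Laplace(1/epsilon) noise to each count (a Laplace draw is the
    # difference of two exponential draws), then release only the argmax.
    noisy = {
        label: count + random.expovariate(epsilon) - random.expovariate(epsilon)
        for label, count in counts.items()
    }
    return max(noisy, key=noisy.get)

# Ten hypothetical teacher models vote on one query; eight say label 1.
votes = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
print(noisy_aggregate(votes))
```

Because only the noisy winner is revealed, an adversary repeatedly querying the ensemble learns the consensus label but gains little about any individual teacher's training examples.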

By following these four steps, organizations can start getting ahead of the AI security threat and mitigate risks that, if unchecked, could prove very destructive as AI adoption rapidly accelerates.

About the Author(s)

Sharon Mandell

SVP and Chief Information Officer, Juniper

Sharon Mandell is the Senior Vice President and Chief Information Officer leading Juniper’s global information technology team. In this role, she leads the ongoing enhancement of the company’s IT infrastructure and applications architectures to support the growth objectives of the company. She and her team are also responsible for showcasing Juniper’s use of its technologies to the world.

Prior to joining Juniper in 2020, Mandell was the Chief Information Officer for TIBCO Software and previously developed her leadership strategy at Harmonic, Black Arrow (now Cadent), and Knight Ridder. Throughout her career, she has built deep expertise in cybersecurity and compliance, enterprise architecture and road mapping, data and analytics, digital transformation, and customer service. She is passionate about supporting women in STEM careers, and in her free time she serves on various arts- and education-related boards. She also proudly serves on the computer science advisory board at Temple University.

Samantha Madrid

VP of Security Business & Strategy, Juniper

Samantha Madrid is the Vice President of Security Business & Strategy at Juniper Networks. She is an expert in the enterprise security market with nearly two decades of experience in roles spanning sales engineering, product management and marketing.
