Dark Reading is part of the Informa Tech Division of Informa PLC


Threat Intelligence

7/22/2020
02:00 PM
Praful Krishna
Commentary

The InfoSec Barrier to AI

Information security challenges are proving to be a huge barrier for the artificial intelligence ecosystem. Conversely, AI is causing headaches for CISOs. Here's why.

Information security is all about keeping an organization's information secure and maintaining its integrity. Some time ago, all this meant was good locks on the doors, physical as well as virtual.

The first challenge to this philosophy arrived in the 2000s with software-as-a-service (SaaS), when vendors asked companies to volunteer data to shared servers. The information security industry responded with stronger encryption and contracts. Enterprises added another layer by simply refusing to let data outside their firewalls, asking for on-premises deployments and private clouds instead. This was the origin of a debate that has gone unsettled for two decades … and counting!

Over the past three to five years, artificial intelligence (AI) has emerged as a force that complicates the arguments on both sides, adding entirely new challenges to information security. Conversely, information security has become a big challenge to the growth of AI itself.

A Bird No Longer in Hand
CISOs have responded with agility to most of the threats AI poses. For example, doing AI well means relying on the construct of function-as-a-service (FaaS). Also called serverless computing, FaaS lets a user run a program without worrying about hardware provisioning or distribution. While this is very useful for training AI models, provisioning for FaaS makes economic sense only at enormous scale. In other words, AI naturally begs for migration to public clouds. That is fine in itself: data leaving the premises may cause headaches, but it does not necessarily mean the data is insecure. Since there is really no way around this, CISOs have stepped up. Still, the concern lingers over what happens to that data outside the firewall.
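To make the FaaS construct concrete, here is a minimal sketch of what "run a program without worrying about hardware" looks like in practice. The handler signature follows the common AWS Lambda convention; the event shape and the scoring logic are hypothetical stand-ins, not a real inference service.

```python
# Minimal function-as-a-service sketch: the user supplies only a handler;
# the platform provisions hardware, scales it, and bills per invocation.
# The "features" payload and averaging logic are illustrative assumptions.

def handler(event, context):
    # Score one record; no server for the caller to manage.
    features = event["features"]
    score = sum(features) / len(features)  # stand-in for real model inference
    return {"score": score}

# A platform would invoke this per request; locally we can call it directly:
result = handler({"features": [2, 4]}, None)
print(result)  # -> {'score': 3.0}
```

The point of the paragraph above survives in the sketch: the author of `handler` never sees the machines it runs on, which is exactly why the economics only work when a provider amortizes hardware across enormous scale.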

AI learns best from data drawn from multiple customers in similar situations, and many vendors have tried to get away with commingling at the model level. For example, IBM offers reassuring words about data privacy on its website but fails to mention that many of Watson's products share a single underlying AI model. Data from every customer, while individually secure, is used to train that single model.

The easier problem in this approach is figuring out exactly what these models retain that could be valuable. For example, consider a popular open source model from Google called Word2Vec. It converts each word in your corpus to a vector of a few hundred dimensions (an array of, say, 100 to 300 numbers). The weights in the vector don't mean much per se; however, Google itself popularized this trick to demonstrate the power of Word2Vec: If you take the vectors for King, Man, and Woman, and compute [King] – [Man] + [Woman], you get a vector very close to [Queen]. While this sounds fantastic and innocuous for general English, relationships like this may leak competitive insights for enterprises. Maybe all we wanted to do was hide the identity of the queen.
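The analogy trick can be sketched in a few lines. The three-dimensional vectors below are toy values chosen by hand so the arithmetic works; real Word2Vec embeddings have hundreds of learned dimensions, but the nearest-neighbor search over `a - b + c` is the same idea.

```python
import numpy as np

# Toy hand-picked "embeddings" (hypothetical, for illustration only):
# dimension 0 ~ male, dimension 1 ~ female, dimension 2 ~ royalty.
vectors = {
    "man":   np.array([1.0, 0.0, 0.0]),
    "woman": np.array([0.0, 1.0, 0.0]),
    "king":  np.array([1.0, 0.0, 1.0]),
    "queen": np.array([0.0, 1.0, 1.0]),
    "apple": np.array([0.1, 0.1, 0.0]),  # distractor word
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def analogy(a, b, c):
    """Return the vocabulary word nearest to vec(a) - vec(b) + vec(c)."""
    target = vectors[a] - vectors[b] + vectors[c]
    candidates = (w for w in vectors if w not in (a, b, c))
    return max(candidates, key=lambda w: cosine(vectors[w], target))

print(analogy("king", "man", "woman"))  # -> queen
```

The security concern in the paragraph above is precisely this: even though no raw training text is stored, the geometry of a trained model encodes relationships between entities, and that geometry can be queried.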

To be fair, it is hard to reverse engineer insights from a trained model in most cases. Still, it is not impossible. CISOs have continually asked for more transparency to counter this.

The Real Unmet Challenge: Deterministic Behavior
The harder problem is of a completely different nature. Let us conduct a thought experiment. Say a CISO is confident about a vendor's security protocols and the integrity of its models, and allows the vendor to process the company's data. Results start to flow. Say the model outputs 2 + 2 = 5, which is found to be generally acceptable. The company designs its systems around this; perhaps a downstream process offsets the result by -1. Most people would say AI has delivered.

This is when we come to the real problem — that of deterministic behavior. Based on the success at this company, the vendor markets the model to a competitor. The AI model gets fresh data. It is smart enough to spot the error in its ways. It learns. It changes the output to 2 + 2 = 4. On the surface, this sounds like another win for AI, but this is a big problem for enterprises. All the investments around the model that our CISO's company made are now useless. The company must recalibrate, reinvest in the downstream systems, or, in the worst case, live with an erroneous output of the process. The rise of AI has added a completely new dimension to CISOs' worries. An AI model's integrity and consistency — its deterministic behavior — is now equally important. Continuously evolving, shared AI models do not come with a guarantee for reliability. At least not yet.
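The thought experiment can be made concrete with a small sketch. The two model versions and the downstream offset below are hypothetical, mirroring the "2 + 2 = 5" story in the text rather than any real vendor's system.

```python
# Sketch of the determinism problem: a downstream pipeline calibrated
# against a model's known bias breaks when the shared model silently updates.

def model_v1(a, b):
    # First deployment: systematically off by +1 (the "2 + 2 = 5" case).
    return a + b + 1

def model_v2(a, b):
    # After retraining on another customer's data, the model "fixes itself".
    return a + b

def downstream(result):
    # The company calibrated its systems around v1's known bias.
    return result - 1

print(downstream(model_v1(2, 2)))  # -> 4, correct by design
print(downstream(model_v2(2, 2)))  # -> 3, silently wrong after the model changed
```

Nothing was breached and both model versions are individually defensible, yet the company's investment around the model is invalidated, which is exactly the reliability guarantee the text says shared, continuously evolving models do not yet offer.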

The Barrier for AI
The debate between proponents of multitenant solutions and those of bespoke implementations was raging well before AI added this new twist. In some quarters it was popular to tag CISOs as conservative if they did not share data; now it seems simply prudent. The bigger implication of this problem is for the AI industry. AI as a product is proving to be a myth. AI companies must develop very differently from their SaaS brethren, yet the funding and scaling models of the venture capital community are geared for SaaS. In short, information security challenges are proving to be a huge barrier for the AI ecosystem.

Maybe it is the information security industry that eventually solves this problem. Perhaps protocols and encodings lie ahead that can force deterministic behavior onto AI. Until then, everyone is stuck in an unfortunate pickle of determinism.


Praful is an AI product leader who helps Fortune 500s conceive and implement digitalization strategies using advanced analytics. After almost a decade in the field, Praful's strengths lie in discovering enterprise needs, translating them into data science priorities, building ...
 
