
Operational Security // AI

Dawn Kawamoto
3/15/2018 09:35 AM

AI: An Emerging Insider Threat?

As artificial intelligence gains a growing presence in the enterprise, concerns are being raised about a new kind of insider threat: AI that turns against its operators. How can security experts address this "frenemy"?

The spectacle of machines outsmarting humans has already played out on national TV and the Internet, from IBM's Watson supercomputer defeating champion Jeopardy! contestants to Google's AlphaGo beating the world's best Go player.

And while the closely watched Jeopardy! and Go competitions demonstrated how computers built on machine learning and packed with artificial intelligence can surpass human intelligence and hold vast potential for delivering good to society, concerns have also emerged about the darker side of the technology.

Visionary Elon Musk, for one, has described AI as "summoning the demon" and warned it could be humanity's "biggest existential threat," according to a video posted by the Washington Post of his presentation at MIT.

(Source: iStock)

And in their report "Safely Interruptible Agents," Google DeepMind, creator of AlphaGo, and Stuart Armstrong, an Alexander Tamas Programme on AI Safety research fellow at Oxford University, warned of the need for a "red button" to disable an AI agent if it learns to avoid being interrupted or stopped by its operator.

Should IT security professionals be worried about machine learning and AI potentially becoming the new insider threat?

AI researchers weigh in on this issue, including Armstrong; Shahar Avin, a research associate with Cambridge University's Centre for the Study of Existential Risk and co-author of "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation"; and Sam Bouso, CEO of Precognitive, an IT security company that uses machine learning and AI.

Near-term outlook
AI is not likely to emerge as a new intentional insider threat in the near term, experts say, pointing to several factors at play.

"I don't see why AI is specifically an insider threat. In the short term, AI could amplify the capabilities of bad actors and possibly of good actors, whether or not they are insiders or not. In the medium term, AI could automate social engineering approaches such as spear phishing, but that's not specifically an insider threat either," Armstrong told Security Now.

Bouso told Security Now that AI is decades away from becoming a credible insider threat, noting that currently the technology is not at a point where it can think on its own and decide between good and bad.

"AI and ML are certainly being used for cybercrime but there is someone behind it when it happens, and they have a specific intent to do harm," Bouso said. "Any sort of rogue AI is nothing more than bad programming and human error at this point."

Indeed. Avin, in an interview with Security Now, noted that while machine learning and AI systems currently do not have a "will" of their own, it is up to companies to set the right policies and restrictions in place to guide the machine's learning. Unfortunately, that may prove challenging.

"We don't yet have robust tools to fully reason through all possible policies the system might take, and so [IT security professionals] should be extra careful with the inputs and outputs connected to such systems, and adversarially test them through red teaming efforts," Avin advised.

The "black-box" nature of many contemporary AI and machine learning algorithms, coupled with their ability to discover novel behaviors or policies within an environment or goal, should prompt companies to give additional care when introducing them to corporate networks and systems -- especially those that contain sensitive or confidential information, Avin noted.

And although IT security folks are concerned about and do discuss AI's potential for self-awareness and malicious intent, Bouso said it's viewed along the lines of "one day this could happen."

Long-term outlook
Although cyber attackers are already using machine learning and AI to wage war on companies and consumers, future developments may one day turn these technologies into IT security professionals' "frenemies."

"Deploying malicious code seems to be an extreme end, and unlikely until we have much more capable systems which are much more trusted," Avin said. "However, it is not unreasonable that a Q&A system will disclose information it was not intended to disclose, or, perhaps more worryingly, an automated configuration generator will create configs that exploit edge cases in the system being configured, such that a metric is maximized, but the system itself fails."

Researchers are focused on further machine learning and AI enhancements to build capable systems that are more trusted, a move the security industry is clamoring for as it faces a steep shortage of workers and fatigue from the constant barrage of alerts hitting its security operations centers (SOCs).

Development of artificial general intelligences (AGIs), for example, is underway.

"Before we get true AGIs, I expect any bugs created by AIs to be minor and non-deliberate. If and when AIs become AGIs, then I expect them to have enough knowledge and learning abilities to deploy malicious code if motivated to do that -- it would have most of the capabilities of humans, and be better in many areas," Armstrong said. "But as I said before, if we assume true AGIs, then civilization will be radically changed anyway."


— Dawn Kawamoto is an award-winning technology and business journalist, whose work has appeared in CNET's News.com, Dark Reading, TheStreet.com, AOL's DailyFinance, and The Motley Fool.
