
Operational Security // AI

10/16/2019 11:00 AM
Steve Durbin

Artificial Intelligence & Cybersecurity: Making It Work for Your Organization

Artificial intelligence (AI) is creating a new frontier in information security. Systems that independently learn, reason and act will increasingly replicate human behavior. Just like humans, they will be imperfect, but also capable of achieving great things.

AI presents new information risks and makes some existing ones more perilous. However, it can also be used for good and must become a key part of every organization's defensive arsenal. Business and information security leaders alike must understand both the risks and opportunities before embracing technologies that will soon become a critically important part of everyday business.

Already, AI is finding its way into many mainstream business use cases. Organizations use variations of AI to support processes in areas including customer service, human resources and bank fraud detection. However, the hype can lead to confusion and skepticism over what AI is and what it actually means for business and security. It is difficult to separate wishful thinking from reality.

Defensive opportunities provided by AI
As AI systems are adopted by organizations, they will become increasingly critical to day-to-day business operations. Some organizations already have, or will have, business models entirely dependent on AI technology. No matter the function for which an organization uses AI, such systems and the information that supports them have inherent vulnerabilities and are at risk from both accidental and adversarial threats. Compromised AI systems make poor decisions and produce unexpected outcomes.

Security practitioners are always fighting to keep up with the methods used by attackers, and AI systems can provide at least a short-term boost by significantly enhancing a variety of defensive mechanisms. AI can automate numerous tasks, helping understaffed security departments to bridge the specialist skills gap and improve the efficiency of their human practitioners. Protecting against many existing threats, AI can put defenders a step ahead. However, adversaries are not standing still -- as AI-enabled threats become more sophisticated, security practitioners will need to use AI-supported defenses simply to keep up.
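As a toy illustration of the kind of task AI can automate for an understaffed team, the sketch below flags hosts whose failed-login counts deviate sharply from a learned baseline. The hostnames and figures are invented, and a production system would use far richer models and telemetry; the point is only that statistical scoring can triage alerts faster than manual review.

```python
import statistics

def anomaly_scores(baseline, observed):
    """Score each observation by how many standard deviations it sits
    from the baseline mean (a z-score). Higher = more anomalous."""
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0  # guard against zero spread
    return {host: abs(count - mean) / stdev for host, count in observed.items()}

# Baseline: typical failed-login counts per hour across the fleet (invented).
baseline = [2, 3, 1, 4, 2, 3, 2, 5, 3, 2]

# Current hour's failed-login counts per host (invented).
observed = {"web-01": 3, "web-02": 4, "db-01": 47}

scores = anomaly_scores(baseline, observed)
flagged = [h for h, s in scores.items() if s > 3.0]  # beyond 3 sigma
print(flagged)  # → ['db-01']
```

Real deployments replace the z-score with trained models, but the workflow is the same: the system surfaces the outliers so human specialists spend their time only on what matters.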

The benefit of AI in terms of response to threats is that it can act independently, taking responsive measures without the need for human oversight and at a much greater speed than a human could. Given the presence of malware that can compromise whole systems almost instantaneously, this is a highly valuable capability.
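A rough sketch of that idea, with hypothetical function names and scores standing in for a real detection and containment pipeline: high-confidence detections are contained immediately at machine speed, while ambiguous ones are queued for human review.

```python
def isolate_host(host):
    """Hypothetical containment action, e.g. an EDR or firewall API call."""
    print(f"quarantined {host}")

def autonomous_response(events, threshold=0.9):
    """Act on high-confidence detections immediately; queue the rest
    for human review. The scores here stand in for whatever confidence
    value the detection system produces."""
    review_queue = []
    for host, score in events:
        if score >= threshold:
            isolate_host(host)          # machine-speed containment
        else:
            review_queue.append(host)   # humans triage ambiguous cases
    return review_queue

events = [("web-01", 0.35), ("db-01", 0.97), ("app-02", 0.6)]
queue = autonomous_response(events)
print(queue)  # → ['web-01', 'app-02']
```

The design choice worth noting is the threshold: it encodes how much autonomy the organization is willing to grant, and it can be tightened or relaxed as trust in the system grows.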

The number of ways in which defensive mechanisms can be significantly enhanced by AI provide grounds for optimism, but as with any new type of technology, it is not a miracle cure. Security practitioners should be aware of the practical challenges involved when deploying defensive AI.

Questions and considerations before deploying defensive AI
AI systems have narrow intelligence and are designed to fulfill one type of task. They require enough data and inputs to complete that task.

One single defensive AI system will not be able to enhance all the defensive mechanisms outlined previously -- an organization is likely to adopt multiple systems. Before purchasing and deploying defensive AI, security leaders should consider whether an AI system is required to solve the problem, or whether more conventional options would do a similar or better job.

Questions to ask:

  • Is the problem bounded? That is, can it be addressed with a single dataset or type of input, or does it require a broader understanding of context, which humans are usually better at providing?
  • Does the organization have the data required to run and optimize the AI system?

Security leaders also need to consider issues of governance around defensive AI, including:

  • How do defensive AI systems fit into organizational security governance structures?
  • How can the organization provide security assurance for defensive AI systems?
  • How can defensive AI systems be maintained, backed up, tested and patched?
  • Does the organization have sufficiently skilled people to provide oversight for defensive AI systems?

AI will not replace the need for skilled security practitioners with technical expertise and an intuitive nose for risk. These security practitioners need to balance the need for human oversight with the confidence to allow AI-supported controls to act autonomously and effectively. Such confidence will take time to develop, especially as stories continue to emerge of AI proving unreliable or making poor or unexpected decisions.

AI systems will make mistakes -- a beneficial aspect of human oversight is that human practitioners can provide feedback when things go wrong and incorporate it into the AI's decision-making process. Of course, humans make mistakes too -- organizations that adopt defensive AI need to devote time, training and support to help security practitioners learn to work with intelligent systems.
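One simple way to picture that feedback loop, as a deliberately crude stand-in for genuine model retraining: analysts mark alerts as false positives or as missed detections, and the system nudges its alerting threshold accordingly.

```python
def update_threshold(threshold, verdict_was_false_positive, step=0.05):
    """Nudge the alerting threshold based on analyst feedback:
    raise it after a false positive (be less trigger-happy),
    lower it after a missed detection (be more sensitive)."""
    if verdict_was_false_positive:
        return min(1.0, threshold + step)
    return max(0.0, threshold - step)

t = 0.9
t = update_threshold(t, verdict_was_false_positive=True)   # analyst: noise
t = update_threshold(t, verdict_was_false_positive=False)  # analyst: real miss
print(round(t, 2))  # → 0.9
```

However rudimentary, the pattern is the important part: human judgment flows back into the system's future decisions rather than being lost after each incident.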

Given time to develop and learn together, the combination of human and artificial intelligence should become a valuable component of an organization's cyber defenses.

The time to prepare is now
The speed and scale at which AI systems "think" will be increased by growing access to big data, greater computing power and continuous refinement of programming techniques. Such power will have the potential to both make and destroy a business.

AI tools and techniques that can be used in defense are also available to malicious actors including criminals, hacktivists and state-sponsored groups. Sooner rather than later these adversaries will find ways to use AI to create completely new threats such as intelligent malware -- and at that point, defensive AI will not just be a "nice to have": It will be a necessity. Security practitioners using traditional controls will not be able to cope with the speed, volume and sophistication of attacks.

To thrive in the new era, organizations need to reduce the risks posed by AI and make the most of the opportunities it offers. That means securing their own intelligent systems and deploying their own intelligent defenses.

— Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include the emerging security threat landscape, cybersecurity, BYOD, the cloud and social media across both the corporate and personal environments. Previously, he was Senior Vice President at Gartner.
