Threat Intelligence

7/26/2018
10:30 AM
Rodney Joffe
Commentary

The Double-Edged Sword of Artificial Intelligence in Security

AI is revolutionizing cybersecurity for both defenders and attackers, as hackers armed with the same weaponized technology fuel a seemingly never-ending arms race.

As artificial intelligence capabilities continue to grow at a rapid pace, AI technologies are becoming ubiquitous both for protecting against cyberattacks and as an instrument for launching them. Last year, Gartner predicted that almost every new software product would implement AI by 2020. Advancements in AI, and its ability to make automated decisions about cyber threats, are revolutionizing the cybersecurity landscape as we know it, from both a defensive and an offensive perspective.

AI in Cyber Defense
As a subfield of AI, machine learning is already easing the burden of threat detection for many cyber defense teams. Drawing on the vast amounts of security data collected by businesses, it can analyze network traffic, establish a baseline of normal activity within a system, and flag suspicious deviations. Anomalies are then fed back to security teams, which make the final decision on how to react.
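The baselining idea above can be sketched in a few lines. This is a minimal, illustrative example, not a production detector: the byte counts are synthetic, and the simple z-score threshold stands in for the statistical or machine learning models a real system would use.

```python
# Minimal sketch of baseline anomaly flagging on one traffic feature
# (bytes per connection); data and threshold are illustrative.
import statistics

# Historical "normal" activity collected from the network.
baseline = [480, 510, 495, 505, 520, 490, 500, 515, 485, 500]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(bytes_transferred, z_threshold=3.0):
    """Flag a flow whose byte count deviates more than z_threshold
    standard deviations from the learned baseline."""
    return abs(bytes_transferred - mean) / stdev > z_threshold

print(is_anomalous(505))    # within the baseline: not flagged
print(is_anomalous(50000))  # far outside it: escalated to the security team
```

In practice the "baseline" would be a model trained on many features (ports, timing, destinations), but the workflow is the same: learn normal, score new activity, and hand the outliers to analysts.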

Machine learning is also able to classify malicious activity at different layers. For example, at the network layer, it can be applied to intrusion detection systems to categorize classes of attacks such as spoofing, denial of service, and data modification. It can likewise be applied at the web application layer and at endpoints to pinpoint malware, spyware, and ransomware.
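To make the network-layer classification concrete, here is a toy nearest-centroid classifier that labels a flow as "normal", "dos", or "spoofing". The two features (packets per second, unique source IPs) and the centroid values are assumptions for illustration; a real intrusion detection system would learn these from labeled traffic.

```python
# Toy nearest-centroid classifier over two illustrative flow features.
import math

centroids = {
    "normal":   (50.0, 5.0),     # moderate rate, few sources
    "dos":      (5000.0, 2.0),   # flood from a handful of sources
    "spoofing": (800.0, 900.0),  # many forged source addresses
}

def classify(packets_per_sec, unique_sources):
    """Assign the flow to the attack class whose centroid is nearest
    in Euclidean distance."""
    point = (packets_per_sec, unique_sources)
    return min(centroids, key=lambda c: math.dist(centroids[c], point))

print(classify(60, 4))     # normal
print(classify(4800, 3))   # dos
print(classify(750, 850))  # spoofing
```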

AI and machine learning are here to stay as key components of a security team's toolbox, particularly as attacks at every level become more frequent and targeted.

AI and Cybercriminals
Even though implementing machine learning technologies is an asset for defense teams, hackers are armed with the very same ammunition and capabilities, creating a seemingly never-ending arms race.

At the beginning of 2018, the Electronic Frontier Foundation's "The Malicious Use of Artificial Intelligence" report warned that AI can be exploited by hackers for malicious purposes, including the ability to target entire states and alter society as we know it. The authors of the report contend that globally we are at "a critical moment in the co-evolution of AI and cybersecurity and should proactively prepare for the next wave of attacks." They point to alleged attacks by Russian actors that manipulated social media in a highly targeted manner as a current example of this threat.

It's no surprise that cyber experts are concerned. After all, for hackers, AI presents the ideal tool to enable scale and efficiency. Similar to the way machine learning can be used to monitor network traffic and analyze data for cyber defense, it can also be used to make automated decisions on who, what, and when to attack. There is potential for hackers to use AI in order to alter an organization's data, as opposed to stealing it outright, causing serious damage to a brand's reputation, profits, and share price. In fact, cybercriminals are already able to utilize AI to mold personalized phishing attacks by collecting information on targets from social media and other publicly available sources.

Guarding Against the "Weaponization" of AI
To protect against AI-launched attacks, security teams should be mindful of three key steps to cement a strong defense:

Step 1: Understand what AI is protecting.
Identify the specific attacks that you are protecting against and what AI or machine learning technologies you have in place to guard against them. Once teams lay this out clearly, they can implement appropriate solutions for patch management and threat vulnerability management, ensure that important data is encrypted, and gain sufficient visibility into the whole environment. It is vital that defenses can rapidly change course, because the target is always moving.

Step 2: Have clearly defined processes in place.
Even organizations with the best technology in the world are only as effective as the processes behind it. The key here is to make sure both security teams and the wider organization understand the procedures that are in place. It is the responsibility of security teams to educate employees on cybersecurity best practices.

Step 3: Know exactly what is normal for the security environment.
Having context around attacks is crucial, but this is often where companies fail. A clear understanding of assets and how they communicate allows organizations to correctly isolate events that aren't normal and investigate them. Ironically, machine learning is an extremely effective tool for providing this context. To safeguard against the weaponization of AI, organizations must build a robust architecture on which the technology operates and be mindful that the right internal education is key to staying a step ahead of attackers.
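One simple way to capture "what is normal" for asset communication is a baseline of host-to-host pairs, with anything outside it queued for investigation. The host names and flows below are hypothetical, and a real deployment would learn this baseline continuously rather than hard-code it.

```python
# Sketch: baseline of which assets normally talk to each other,
# used to surface never-before-seen communications.
known_pairs = {
    ("web-01", "db-01"),
    ("web-01", "cache-01"),
    ("app-02", "db-01"),
}

def flag_unusual(flows):
    """Return (source, destination) flows absent from the baseline."""
    return [f for f in flows if f not in known_pairs]

observed = [
    ("web-01", "db-01"),       # routine traffic
    ("db-01", "attacker-c2"),  # a database host calling out: investigate
]
print(flag_unusual(observed))  # [('db-01', 'attacker-c2')]
```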


Rodney Joffe is a sought-after cybersecurity expert who, among other notable accomplishments, leads the Conficker Working Group to protect the world from the Conficker worm. He has provided guidance and knowledge to organizations from the United States government to the ...
Comments
Dr.T | 7/29/2018 12:35:08 PM
AI as a tool
"It's no surprise that cyber experts are concerned. After all, for hackers, AI presents the ideal tool to enable scale and efficiency." I agree with this: AI is just a tool, and it can be used for good and bad purposes.
Dr.T | 7/29/2018 12:24:34 PM
Re: The unfortunate choice of the term: "Artificial Intelligence"
"Fuzzy Logic [has] the capability to generate astonishing and highly useful results; but those results come from a new kind of collective human intelligence." Agree with this. With AI we are trying to take the human out of this equation.
Dr.T | 7/29/2018 12:19:59 PM
Re: The unfortunate choice of the term: "Artificial Intelligence"
"[A] form of automation - and their 'decisions' are predetermined by programming." AI is less about programming and more about training; basically, training produces the program for us.
Dr.T | 7/29/2018 12:14:50 PM
Re: The unfortunate choice of the term: "Artificial Intelligence"
"Yet, even these are apt to let some important components of their 'HI' (Human Intelligence) ..." For me, the ultimate goal of AI is to simulate HI (human intelligence).
Dr.T | 7/29/2018 12:12:02 PM
AI in cybersecurity
Given the amount of data that needs to be processed daily, there is no practical way of managing security other than using an automated system. No amount of human resources will be enough to manage the security tasks in today's world, since the volume is huge and requires a more efficient way of analyzing it.
BrianN060 | 7/27/2018 11:50:49 AM
The unfortunate choice of the term: "Artificial Intelligence"
I'd guess most readers here have a pretty good (some even an expert's) idea of the mechanisms within the technologies categorized as "AI". Yet even they are apt to let some important components of their "HI" (Human Intelligence) lead them to the same false implications as the layman; in a way similar to that which brought Percival Lowell to interpret the observation of "canali" (grooves or channels) by an Italian astronomer as evidence of "MI" (Martian Intelligence). Just as Carl Sagan pointed out, the real intelligence [perception, imagination and creativity] was on Lowell's end of the telescope.

https://en.wikipedia.org/wiki/File:Karte_Mars_Schiaparelli_MKL1888.png 

https://en.wikipedia.org/wiki/File:Lowell_Mars_channels.jpg

The article has "...AI and its ability to make automated decisions..."; which is fine, as long as you understand that AI technologies are a form of automation - and their "decisions" are predetermined by programming. Programs don't "decide" any more than an electron decides which path to take through a transistor. The incorporation of powerful statistical processes, unprecedented computational power and Fuzzy Logic has the capability to generate astonishing and highly useful results; but those results come from a new kind of collective human intelligence - nothing artificial about it.

Keep this in mind, and we're less apt to wander down the road that leads to "UFO" implying spacecraft of extraterrestrial origin (rather than that not all objects observed flying are identified); or to thinking that recognizing that the earth's climate has always been changing implies that we buy into a particular catechism of assumptions known as "climate change".