Threat Intelligence

7/26/2018
10:30 AM
Rodney Joffe
Commentary

The Double-Edged Sword of Artificial Intelligence in Security

AI is revolutionizing cybersecurity for both defenders and attackers as hackers, armed with the same weaponized technology, create a seemingly never-ending arms race.

As artificial intelligence capabilities continue to grow at a rapid pace, AI technologies are becoming ubiquitous both for protecting against cyberattacks and as an instrument for launching them. Last year, Gartner predicted that almost every new software product would implement AI by 2020. The advancements in AI and its ability to make automated decisions about cyber threats are revolutionizing the cybersecurity landscape as we know it, from both a defensive and an offensive perspective.

AI in Cyber Defense
As a subdivision of AI, machine learning is already easing the burden of threat detection for many cyber defense teams. Its ability to analyze network traffic and establish a baseline for normal activity within a system can be used to flag suspicious activity, drawing from vast amounts of security data collected by businesses. Anomalies are then fed back to security teams, which make the final decision on how to react. 
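The baseline-then-flag workflow described above can be sketched in a few lines. This is a minimal illustration using a simple z-score over hypothetical per-minute request counts; real deployments use far richer features and models, and the threshold here is an assumption for demonstration only.

```python
# Minimal sketch: establish a baseline of "normal" activity, then flag
# deviations for human review. Feature choice (request counts) and the
# z-score threshold are illustrative assumptions.
from statistics import mean, stdev

def build_baseline(samples):
    """Summarize normal activity as (mean, standard deviation)."""
    return mean(samples), stdev(samples)

def flag_anomalies(baseline, observations, z_threshold=3.0):
    """Return observations that deviate too far from the baseline.

    Flagged items are handed back to the security team, which makes
    the final decision -- the model only surfaces candidates.
    """
    mu, sigma = baseline
    return [x for x in observations
            if sigma > 0 and abs(x - mu) / sigma > z_threshold]

# A quiet week of per-minute request counts, followed by a traffic spike:
normal = [100, 104, 98, 101, 99, 102, 97, 103]
baseline = build_baseline(normal)
print(flag_anomalies(baseline, [101, 99, 100, 950]))  # only the spike is flagged
```

Note that the code never acts on the anomaly itself; as the article describes, the final reaction is left to the security team.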

Machine learning is also able to classify malicious activity on different layers. For example, at the network layer, it can be applied to intrusion detection systems, in order to categorize classes of attacks like spoofing, denial of service, data modification, and so on. Machine learning can also be applied at the web application layer and at endpoints to pinpoint malware, spyware, and ransomware.
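To make the classification idea concrete, here is a toy nearest-centroid classifier over hypothetical network-flow features (packet rate, payload entropy). The labels, features, and numbers are all assumptions for illustration; real intrusion detection systems train on large labeled flow datasets with many more features.

```python
# Toy sketch of multi-class attack categorization at the network layer.
# Flows are assumed to be pre-reduced to numeric feature vectors; the
# classes and feature values below are hypothetical.
from math import dist

def train_centroids(labeled_flows):
    """Compute one centroid per attack class from labeled feature vectors."""
    by_label = {}
    for features, label in labeled_flows:
        by_label.setdefault(label, []).append(features)
    return {label: tuple(sum(col) / len(col) for col in zip(*vecs))
            for label, vecs in by_label.items()}

def classify(centroids, features):
    """Assign a flow to the class whose centroid is nearest."""
    return min(centroids, key=lambda label: dist(centroids[label], features))

training = [
    ((9500.0, 0.1), "denial_of_service"),   # very high packet rate
    ((9800.0, 0.2), "denial_of_service"),
    ((40.0, 0.9), "spoofing"),              # low rate, unusual entropy
    ((55.0, 0.8), "spoofing"),
]
centroids = train_centroids(training)
print(classify(centroids, (9000.0, 0.15)))  # → denial_of_service
```

The same pattern applies at the web application layer or at endpoints; only the feature extraction changes.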

AI/machine learning is already here to stay as a key component in a security team's toolbox, particularly given that attacks at every level are becoming more frequent and targeted.

AI and Cybercriminals
Even though implementing machine learning technologies is an asset for defense teams, hackers are armed with the very same ammunition and capabilities, creating a seemingly never-ending arms race.

At the beginning of 2018, the Electronic Frontier Foundation's "The Malicious Use of Artificial Intelligence" report warned that AI can be exploited by hackers for malicious purposes, including the ability to target entire states and alter society as we know it. The authors of the report contend that globally we are at "a critical moment in the co-evolution of AI and cybersecurity and should proactively prepare for the next wave of attacks." They point to the alleged attacks by Russian actors in manipulating social media in a highly targeted manner as a current example of this threat. 

It's no surprise that cyber experts are concerned. After all, for hackers, AI presents the ideal tool to enable scale and efficiency. Similar to the way machine learning can be used to monitor network traffic and analyze data for cyber defense, it can also be used to make automated decisions on who, what, and when to attack. There is potential for hackers to use AI in order to alter an organization's data, as opposed to stealing it outright, causing serious damage to a brand's reputation, profits, and share price. In fact, cybercriminals are already able to utilize AI to mold personalized phishing attacks by collecting information on targets from social media and other publicly available sources.

Guarding Against the "Weaponization" of AI
To protect against AI-launched attacks, security teams should be mindful of three key steps to cement a strong defense:

Step 1: Understand what AI is protecting.
Identify the specific attacks you are protecting against and what AI or machine learning technologies you have in place to guard against them. Once teams lay this out clearly, they can implement appropriate solutions for patch management and threat vulnerability management to ensure that important data is encrypted and there is sufficient visibility into the whole environment. It is vital to retain the option to change course rapidly on defense, because the target is always moving.

Step 2: Have clearly defined processes in place.
Organizations with the best technology in the world are only as effective as the processes they follow. The key here is to make sure both the security teams and the wider organization understand the procedures in place. It is the security teams' responsibility to educate employees on cybersecurity best practices.

Step 3: Know exactly what is normal for the security environment.
Having context around attacks is crucial, but this is often where companies fail. A clear understanding of assets and how they communicate allows organizations to correctly isolate events that aren't normal and investigate them. Ironically, machine learning is an extremely effective tool for providing this context. To safeguard against the weaponization of AI, organizations must build a robust architecture on which the technology operates and remember that the right internal education is key to staying a step ahead of attackers.
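The "know what is normal" step above can be sketched as a baseline of which assets normally talk to each other, with anything unseen surfaced for investigation. The host names and flows below are hypothetical, and a real inventory would draw on flow logs or asset-management tooling.

```python
# Minimal sketch: learn which (source, destination) pairs are normal for
# the environment, then surface communications never seen before so
# analysts can investigate them with context. All hosts are hypothetical.
def learn_peers(flow_log):
    """Record which host pairs normally communicate."""
    return {(src, dst) for src, dst, *_ in flow_log}

def unusual_flows(baseline_pairs, new_flows):
    """Return flows whose (source, destination) pair was never baselined."""
    return [flow for flow in new_flows
            if (flow[0], flow[1]) not in baseline_pairs]

baseline = learn_peers([
    ("app-01", "db-01", 5432),
    ("app-01", "cache-01", 6379),
])
print(unusual_flows(baseline, [("app-01", "db-01", 5432),
                               ("db-01", "203.0.113.9", 443)]))
# only the database host reaching an external address is surfaced
```

As the article notes, this is exactly the kind of context machine learning can provide at scale, once the asset baseline exists.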


Rodney Joffe is a sought-after cybersecurity expert who, among other notable accomplishments, led the Conficker Working Group to protect the world from the Conficker worm. Providing guidance and knowledge to organizations from the United States government to the ...
Comments
Dr.T, User Rank: Ninja
7/29/2018 | 12:35:08 PM
AI as a tool
"It's no surprise that cyber experts are concerned. After all, for hackers, AI presents the ideal tool to enable scale and efficiency." I agree with this: AI is just a tool, and it can be used for good and bad purposes.
Dr.T, User Rank: Ninja
7/29/2018 | 12:24:34 PM
Re: The unfortunate choice of the term: "Artificial Intelligence"
"Fuzzy Logic have the capability to generate astonishing and highly useful results; but those results come from a new kind of collective human intelligence." Agree with this. With AI we are trying to take the human out of this equation.
Dr.T, User Rank: Ninja
7/29/2018 | 12:19:59 PM
Re: The unfortunate choice of the term: "Artificial Intelligence"
"...a form of automation - and their 'decisions' are predetermined by programming." AI is less about programming and more about training. Basically, training produces the program for us.
Dr.T, User Rank: Ninja
7/29/2018 | 12:14:50 PM
Re: The unfortunate choice of the term: "Artificial Intelligence"
"Yet, even these are apt to let some important components of their 'HI' (Human Intelligence)..." For me, the ultimate goal of AI is to simulate HI (human intelligence).
Dr.T, User Rank: Ninja
7/29/2018 | 12:12:02 PM
AI in cybersecurity
Based on the amount of data that needs to be processed daily, there is no practical way of managing security other than using an automated system. No amount of human resources will be enough to manage the security tasks in today's world, since the volume is huge and requires a more efficient way of analyzing it.
BrianN060, User Rank: Ninja
7/27/2018 | 11:50:49 AM
The unfortunate choice of the term: "Artificial Intelligence"
I'd guess most readers here have a pretty good (some even an expert's) idea of the mechanisms within the technologies categorized as "AI".  Yet, even these are apt to let some important components of their "HI" (Human Intelligence) lead them to the same false implications as the layman; in a way similar to that which brought Percival Lowell to interpret the observation of "canali" (grooves or channels), by an Italian astronomer, as evidence of "MI" (Martian Intelligence).  Just as Carl Sagan pointed out, the real intelligence [perception, imagination and creativity] was on Lowell's end of the telescope.

https://en.wikipedia.org/wiki/File:Karte_Mars_Schiaparelli_MKL1888.png 

https://en.wikipedia.org/wiki/File:Lowell_Mars_channels.jpg

The article has "...AI and its ability to make automated decisions..."; which is fine, as long as you understand that AI technologies are a form of automation - and their "decisions" are predetermined by programming.   Programs don't "decide" any more than an electron decides which path to take through a transistor.  Incorporation of powerful statistical processes, unprecedented computational power and Fuzzy Logic have the capability to generate astonishing and highly useful results; but those results come from a new kind of collective human intelligence - nothing artificial about it. 

Keep this in mind, and we're less apt to wander down the road that leads to "UFO" implying spacecraft of extraterrestrial origin (rather than that not all objects observed flying are identified); or to conclude that recognizing the earth's climate has always been changing implies that we buy into a particular catechism of assumptions known as "climate change".