
Commentary | Satish Abburi | 3/21/2019 02:30 PM
Hacker AI vs. Enterprise AI: A New Threat

Artificial intelligence and machine learning are being weaponized using the same logic and functionality that legitimate organizations use.

Attackers' use of artificial intelligence (AI) and machine learning (ML) for malicious ends may be embryonic, but the prospect is becoming real. The shift is evolutionary: AI and ML gradually found their way out of the labs and into security defenses, and now they are increasingly being weaponized to overcome those defenses by subverting the same logic and underlying functionality.

Hackers and CISOs alike have access to the power of these developments, some of which have matured into off-the-shelf, plug-and-play offerings that let attackers get up and running quickly. It was only a matter of time before hackers started exploiting the flexibility of AI to find weaknesses as enterprises roll it out in their defensive strategies.

The intent of intelligence-based assaults remains the same as "regular" hacking. They could be politically motivated incursions, nation-state attacks, enterprise attacks to exfiltrate intellectual property, or financial services attacks to steal funds — the list is endless. AI and ML are normally considered a force for good. But in the hands of bad actors, they can wreak serious damage. Are we heading toward a future where bots will battle each other in cyberspace?

When Good Software Turns Bad
Automated penetration testing using ML is a few years old. Now, tools such as Deep Exploit let adversaries pen test a targeted organization and find open holes in its defenses in 20 to 30 seconds, a process that used to take hours. ML models speed the process by quickly ingesting data, analyzing it, and producing results optimized for the next stage of attack.
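
To make this concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn) of the pattern behind ML-assisted pen testing: a classifier trained on the results of past engagements learns which service fingerprints tended to be exploitable, then ranks freshly scanned hosts. The features, training data, and hosts are all invented for illustration; this is not how Deep Exploit itself is implemented.

from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per host: [port, service_age_years, auth_required]
X_train = [
    [21, 8, 0],   # old anonymous FTP: exploit succeeded
    [445, 6, 0],  # unpatched SMB: exploit succeeded
    [443, 1, 1],  # current TLS with auth: exploit failed
    [22, 2, 1],   # hardened SSH: exploit failed
]
y_train = [1, 1, 0, 0]  # 1 = exploitable in past engagements

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# Rank newly scanned hosts by predicted likelihood of exploitation,
# so the attack tooling knows where to try first.
new_hosts = {"10.0.0.5": [21, 9, 0], "10.0.0.8": [443, 1, 1]}
for host, features in sorted(
    new_hosts.items(),
    key=lambda kv: -model.predict_proba([kv[1]])[0][1],
):
    p = model.predict_proba([features])[0][1]
    print(f"{host}: exploit likelihood {p:.2f}")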

Cloud computing and access to powerful CPUs and GPUs increase the risk that these adversaries become expert at wielding AI/ML tool sets that were designed for the good guys to use.

When combined with AI, ML provides automation platforms for exploit kits. Essentially, we are fast approaching the industrialization of automated intelligence aimed at breaking down cyber defenses that were themselves constructed with AI and ML.

Many of these successful exploit kits enable a new level of automation that makes attackers more intelligent, efficient, and dangerous. DevOps and many IT groups are using AI and ML for gaining insights into their operations, and attackers are following suit.

Injecting Corrupted Data
As researchers point out, attackers will learn how an enterprise defends itself with ML, then feed corrupt data into the specific computational algorithms and statistical models the enterprise uses in order to throw off its defensive machine learning models. Ingested data is the key to the puzzle: it is what enables ML to unlock the AI knowledge, which makes it a natural target.
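
The simplest form of this attack is label flipping, sketched below with invented data: an attacker who can influence the training set labels copies of their own malicious traffic as benign, dragging the model's decision boundary until similar traffic scores as safe.

from sklearn.linear_model import LogisticRegression

# Hypothetical features per session: [outbound_MB_per_hour, failed_logins_per_hour]
benign = [[1, 0], [2, 1], [1, 1], [3, 0]]
malicious = [[50, 20], [60, 25], [55, 30]]

# Clean training: the model learns to flag the malicious pattern.
clean = LogisticRegression().fit(
    benign + malicious, [0] * len(benign) + [1] * len(malicious)
)

# Poisoned training: the attacker sneaks mislabeled samples of their own
# behavior into the "benign" class before the model is retrained.
poison = [[50, 20], [55, 25], [60, 22], [58, 28], [52, 24], [57, 26]]
poisoned = LogisticRegression().fit(
    benign + poison + malicious,
    [0] * len(benign) + [0] * len(poison) + [1] * len(malicious),
)

attack = [[56, 24]]
print("clean model flags attack:", clean.predict(attack)[0])        # expected: 1
print("poisoned model flags attack:", poisoned.predict(attack)[0])  # typically 0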

Many ML models in cybersecurity solutions, especially deep learning models, are considered black boxes in the industry. They can use more than 100,000 feature inputs to make their determinations and detect the patterns needed to solve a problem, such as spotting anomalous cyber exploit behavior in an organization or network.
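
As a toy illustration of that opacity, the sketch below trains an unsupervised anomaly detector on randomly generated telemetry. Real deployments use vastly more features, but the effect is the same: the model returns a score, not a rationale.

from sklearn.ensemble import IsolationForest
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical telemetry: each row is a session, each column one feature
# (production models may consume tens of thousands of such features).
normal_sessions = rng.normal(0, 1, size=(500, 20))
detector = IsolationForest(random_state=0).fit(normal_sessions)

# A session far from the learned baseline gets a lower (negative) score,
# but the model offers no explanation of which features drove the call.
odd_session = rng.normal(6, 1, size=(1, 20))
print("typical session score:", detector.decision_function(normal_sessions[:1])[0])
print("anomalous session score:", detector.decision_function(odd_session)[0])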

From the security team's point of view, this can mean trusting a model or algorithm inside a black box they don't understand, and that level of required trust prompts the question: Can "math" really catch the bad actors?

Data Poisoning
One improvement on the horizon is enabling teams in the security operations center to understand how ML models reach their conclusions rather than having to flat-out trust that the algorithms are doing their jobs. So when the model says there is anomalous, risky behavior, the software can explain the reasoning behind the math and how it came to that conclusion.
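
One simple way to approximate that kind of explanation, sketched below with invented features and data, is to report each feature's contribution to a linear risk model's output alongside the score itself. Production explainers (SHAP, for example) are far more sophisticated, but the principle of surfacing the "why" next to the verdict is the same.

from sklearn.linear_model import LogisticRegression

feature_names = ["failed_logins", "bytes_out_MB", "new_country_login"]
X = [[0, 2, 0], [1, 3, 0], [12, 40, 1], [9, 35, 1]]  # hypothetical history
y = [0, 0, 1, 1]                                     # 1 = confirmed risky
model = LogisticRegression().fit(X, y)

alert = [10, 38, 1]
print("risk score:", round(model.predict_proba([alert])[0][1], 2))

# For a linear model, coefficient * value is a crude per-feature contribution
# the SOC analyst can read as "why the model fired."
contributions = sorted(
    zip(feature_names, (c * v for c, v in zip(model.coef_[0], alert))),
    key=lambda kv: -abs(kv[1]),
)
for name, c in contributions:
    print(f"  {name}: {c:+.2f}")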

This is extremely important when it is difficult to detect whether adversaries have injected bad data, or "poisoned" it, into defensive enterprise security tools to retrain the models away from their attack vectors. By poisoning the ML model's training data, adversaries can reshape the baseline behavioral profile so that their own activity earns an artificially low risk score within the enterprise and their intrusion is allowed to continue.
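
The "slow drift" variant of this poisoning is easy to see with made-up numbers, as in the sketch below: if the defender's notion of normal is a rolling statistic over observed behavior, an attacker who escalates gradually drags the baseline along and never trips the alert threshold.

import statistics

history = [10.0] * 30   # requests/minute the model has learned as normal
K = 3.0                 # alert when a value exceeds mean + K * stddev

def is_anomalous(history, value):
    mean = statistics.fmean(history)
    sd = max(statistics.pstdev(history), 1.0)  # floor keeps the toy model stable
    return value > mean + K * sd

for minute in range(60):
    value = 10.0 + 0.1 * minute  # the attacker ramps up very slowly
    if is_anomalous(history, value):
        print(f"minute {minute}: ALERT at {value:.1f} req/min")
        break
    history.append(value)        # the model "retrains" on poisoned observations
else:
    print(f"no alert fired; baseline drifted to {statistics.fmean(history):.1f} req/min")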

What the Future Holds
Consider other aims, such as influencing voters: bad actors run ML against Twitter feeds to spot the messaging patterns politicians use to sway specific groups of voters. Once their ML algorithms find these campaigns and identify their patterns, they can create counter-campaigns to manipulate opinion or poison a positive campaign being pushed by a political group.
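
As a toy illustration with invented tweets, clustering near-duplicate messaging is one of the simplest ways such patterns surface; real influence detection and counter-campaigning are, of course, far more involved.

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

tweets = [
    "Candidate A will fix the economy",
    "Candidate A is fixing the economy",
    "Vote Candidate A for jobs and the economy",
    "Lovely weather in the park today",
    "Candidate B is bad for farmers",
    "Farmers are being hurt by Candidate B",
]

# Group tweets by textual similarity; coordinated campaigns tend to
# produce tight clusters of near-identical talking points.
vectors = TfidfVectorizer().fit_transform(tweets)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)
for label, tweet in sorted(zip(labels, tweets)):
    print(label, tweet)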

Then there is the threat of botnets. Mirai was the first to cause widespread havoc, and variants now use new attack vectors to assemble zombie hordes of Internet of Things devices. More complex industrial IoT attacks aim at taking down nuclear facilities or even whole smart cities, and researchers have studied how advanced botnets could take down water systems and power grids.

AI and ML are now off-the-shelf capabilities, available to midlevel engineers who no longer need to be data scientists to master them. The one thing keeping this from being a perfect technology for good actors and bad actors alike is operationalizing machine learning to greatly reduce false positives and false negatives.
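
The sketch below, using hypothetical risk scores, shows why that is hard: moving the alert threshold merely trades false positives for false negatives rather than eliminating both.

# Hypothetical model outputs for eight events, with ground-truth labels.
scores = [0.05, 0.10, 0.30, 0.45, 0.55, 0.70, 0.90, 0.95]
labels = [0,    0,    0,    1,    0,    1,    1,    1]

for threshold in (0.2, 0.5, 0.8):
    preds = [int(s >= threshold) for s in scores]
    fp = sum(1 for p, l in zip(preds, labels) if p and not l)
    fn = sum(1 for p, l in zip(preds, labels) if l and not p)
    print(f"threshold {threshold}: {fp} false positives, {fn} false negatives")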

That is what new "cognitive" technologies aspire to become: more than the sum of their AI and ML parts, not just accurately detecting patterns of bad behavior in big data but also justifying recommendations about how to deal with them by providing context for the decision-making.


Satish Abburi is the Founder of Elysium Analytics, the cognitive SIEM (security information and event management) company, incubated at System Soft Technologies, where he also leads the Big Data Solutions practice. Prior to this, Satish was Vice President of Engineering at ...
 
