Dark Reading is part of the Informa Tech Division of Informa PLC


Satish Abburi

Hacker AI vs. Enterprise AI: A New Threat

Artificial intelligence and machine learning are being weaponized using the same logic and functionality that legitimate organizations use.

The adversarial use of artificial intelligence (AI) and machine learning (ML) by attackers may be embryonic, but the prospect is becoming real. The progression is evolutionary: AI and ML gradually found their way out of the labs and were deployed for security defenses, and now they're increasingly being weaponized to overcome those same defenses by subverting the logic and underlying functionality they rely on.

Hackers and CISOs alike have access to the power of these developments, some of which are turning into off-the-shelf, plug-and-play offerings that let attackers get up and running quickly. It was only a matter of time before hackers started exploiting the flexibility of AI to find weaknesses as enterprises roll it out in their defensive strategies.

The intent of intelligence-based assaults remains the same as "regular" hacking. They could be politically motivated incursions, nation-state attacks, enterprise attacks to exfiltrate intellectual property, or financial services attacks to steal funds — the list is endless. AI and ML are normally considered a force for good. But in the hands of bad actors, they can wreak serious damage. Are we heading toward a future where bots will battle each other in cyberspace?

When Good Software Turns Bad
Automated penetration testing using ML is a few years old. Now, tools such as DeepExploit can be used by adversaries to pen test their targeted organizations and find open holes in defenses in 20 to 30 seconds, a process that used to take hours. ML models speed the process by quickly ingesting data, analyzing it, and producing results that are optimized for the next stage of the attack.
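To make the "optimized for the next stage" idea concrete, the prioritization step can be sketched in a few lines. This is a hypothetical illustration, not code from DeepExploit or any real exploit kit: the feature names, weights, and scan results are all invented for the example.

```python
# Hypothetical sketch: how an automated attack tool might rank recon
# results so the next stage targets the softest service first.
# Features and weights are illustrative only.

def exploitability_score(service, weights):
    """Weighted sum of recon features; higher means a softer target."""
    return sum(weights[k] * service.get(k, 0.0) for k in weights)

# Invented weighting: known CVEs matter most, then software age, then exposure.
weights = {"known_cves": 0.6, "version_age_years": 0.25, "internet_exposed": 0.15}

scan_results = [
    {"host": "10.0.0.5", "port": 443,  "known_cves": 0, "version_age_years": 0.5, "internet_exposed": 1},
    {"host": "10.0.0.9", "port": 8080, "known_cves": 3, "version_age_years": 4,   "internet_exposed": 1},
    {"host": "10.0.0.7", "port": 22,   "known_cves": 1, "version_age_years": 2,   "internet_exposed": 0},
]

# Sort targets from most to least exploitable.
ranked = sorted(scan_results, key=lambda s: exploitability_score(s, weights), reverse=True)
for s in ranked:
    print(s["host"], round(exploitability_score(s, weights), 2))
```

A real tool would learn the weights from past successes rather than hard-coding them, but the loop is the same: ingest scan data, score it, attack the top of the list.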

Cloud computing and ready access to powerful CPUs and GPUs increase the risk that these adversaries become experts at wielding AI/ML tool sets that were designed for the good guys to use.

Combined with AI, ML provides automation platforms for exploit kits. In essence, we're fast approaching the industrialization of automated intelligence aimed at breaking down cyber defenses that were themselves constructed with AI and ML.

Many of these successful exploit kits enable a new level of automation that makes attackers more intelligent, efficient, and dangerous. DevOps and many IT groups are using AI and ML for gaining insights into their operations, and attackers are following suit.

Injecting Corrupted Data
As researchers point out, attackers will learn how an enterprise defends itself with ML, then feed corrupt data into the computational algorithms and statistical models that enterprise uses, throwing off its defensive machine learning models. Ingested data is the key to the puzzle: it is what enables ML to unlock the AI's knowledge, which also makes it the natural point of attack.

Many ML models in cybersecurity solutions, especially deep learning models, are considered black boxes in the industry. They can use more than 100,000 feature inputs to make their determinations and detect the patterns needed to solve a problem, such as detecting anomalous exploit behaviors in an organization or network.

From the security team's point of view, this requires trusting a model or algorithm inside a black box they don't understand. That level of required trust prompts the question: can "math" really catch the bad actors?

Data Poisoning
One improvement on the horizon is the ability to enable teams in the security operations center to understand how ML models reach their conclusions rather than having to flat-out trust that the algorithms are doing their jobs. So, when the model says there is anomalous risky behavior, the software can explain the reasoning behind the math and how it came to that conclusion.
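A minimal sketch of what such explainability might look like: instead of emitting a single opaque risk number, the scorer also reports which feature drove the score. The feature names, baseline history, and scoring method (simple per-feature z-scores) are invented for illustration; production models are far more sophisticated.

```python
# Toy "explainable" anomaly scorer: returns both a risk score and the
# per-feature contributions behind it. All data here is invented.
from statistics import mean, stdev

def explain_anomaly(observation, history):
    """Return (score, per-feature contributions) using simple z-scores."""
    contributions = {}
    for feature, value in observation.items():
        past = [h[feature] for h in history]
        mu, sigma = mean(past), stdev(past)
        contributions[feature] = abs(value - mu) / sigma if sigma else 0.0
    return sum(contributions.values()), contributions

# A short window of "normal" behavior for one user (invented numbers).
history = [{"logins_per_hour": 4, "mb_uploaded": 10},
           {"logins_per_hour": 5, "mb_uploaded": 12},
           {"logins_per_hour": 6, "mb_uploaded": 11}]

# Today the login rate is normal but the upload volume is not.
score, why = explain_anomaly({"logins_per_hour": 5, "mb_uploaded": 900}, history)
top = max(why, key=why.get)
print(f"risk score {score:.1f}, driven mainly by {top}")
```

The point is the second return value: an analyst sees not just "high risk" but "high risk because upload volume is hundreds of standard deviations above this user's baseline."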

This is extremely important because it's difficult to detect whether adversaries have injected bad data into defensive enterprise security tools, "poisoning" them to retrain the models away from the adversaries' attack vectors. By poisoning the ML model's training data, adversaries can establish a baseline behavioral paradigm in which their own behavior artificially attains a low risk score within the enterprise, allowing the intrusion to continue.

What the Future Holds
For other intents, such as influencing voters, bad actors run ML against Twitter feeds to spot the patterns politicians use to sway specific groups of voters. Once their ML algorithms find these campaigns and identify their patterns, the attackers can create counter-campaigns to manipulate opinion or poison a positive campaign being pushed by a political group.

Then, there is the threat of botnets. Mirai was the first to cause widespread havoc, and now there are variants that use new attack vectors to create the zombie hordes of Internet of Things devices. There are even more complex industrial IoT attacks focused on taking down nuclear facilities or even whole smart cities. Researchers have studied how potential advanced botnets can take down water systems and power grids.

AI and ML capabilities are now off-the-shelf and available to midlevel engineers, who no longer need to be data scientists to master them. The one thing that keeps this from being a perfect technology, for good actors or bad, is the difficulty of operationalizing machine learning to greatly reduce false positives and false negatives.

That is what new "cognitive" technologies aspire to become: more than the sum of their AI and ML parts, not just detecting patterns of bad behavior in big data accurately, but also justifying their recommendations for how to deal with them by providing context for the decision-making.


Satish Abburi is the Founder of Elysium Analytics, the cognitive SIEM (security information and event management) company, incubated at System Soft Technologies, where he also leads the Big Data Solutions practice. Prior to this, Satish was Vice President of Engineering at ... View Full Bio
