Dark Reading is part of the Informa Tech Division of Informa PLC


Satish Abburi

Hacker AI vs. Enterprise AI: A New Threat

Artificial intelligence and machine learning are being weaponized using the same logic and functionality that legitimate organizations use.

The adversarial use of artificial intelligence (AI) and machine learning (ML) by attackers may be embryonic, but the prospect is becoming real. It's evolutionary: AI and ML have gradually found their way out of the labs and been deployed for security defenses, and now they're increasingly being weaponized to overcome those defenses by subverting the same logic and underlying functionality.

Hackers and CISOs alike have access to the power of these developments, some of which are turning into off-the-shelf, plug-and-play offerings that let hackers get up and running quickly. It was only a matter of time before hackers started taking advantage of the flexibility of AI to find weaknesses as enterprises roll it out in their defensive strategies.

The intent of intelligence-based assaults remains the same as "regular" hacking. They could be politically motivated incursions, nation-state attacks, enterprise attacks to exfiltrate intellectual property, or financial services attacks to steal funds — the list is endless. AI and ML are normally considered a force for good. But in the hands of bad actors, they can wreak serious damage. Are we heading toward a future where bots will battle each other in cyberspace?

When Good Software Turns Bad
Automated penetration testing using ML is a few years old. Now, tools such as Deep Exploit can be used by adversaries to pen test their targeted organizations and find open holes in defenses in 20 to 30 seconds — it used to take hours. ML models speed the process by quickly ingesting data, analyzing it, and producing results that are optimized for the next stage of attack.
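The automation idea can be illustrated with a toy sketch. This is not how Deep Exploit actually works; it is a minimal epsilon-greedy bandit that learns, over repeated attempts, which exploit module succeeds most often against a class of targets. The module names and success rates below are entirely hypothetical.

```python
import random

# Hypothetical exploit modules and their (unknown to the attacker)
# true success rates against a class of targets.
MODULES = ["smb_overflow", "web_sqli", "ssh_bruteforce"]
TRUE_SUCCESS = {"smb_overflow": 0.1, "web_sqli": 0.6, "ssh_bruteforce": 0.3}

def run_campaign(trials=2000, epsilon=0.1, seed=42):
    """Epsilon-greedy selection: mostly pick the best-performing module
    so far, occasionally explore. Returns the empirically best module."""
    rng = random.Random(seed)
    attempts = {m: 0 for m in MODULES}
    successes = {m: 0 for m in MODULES}
    for _ in range(trials):
        if rng.random() < epsilon:
            module = rng.choice(MODULES)  # explore a random module
        else:
            # exploit the current best estimate (untried modules score 1.0)
            module = max(MODULES, key=lambda m: successes[m] / attempts[m]
                         if attempts[m] else 1.0)
        attempts[module] += 1
        if rng.random() < TRUE_SUCCESS[module]:
            successes[module] += 1
    return max(MODULES, key=lambda m: successes[m] / max(attempts[m], 1))

print(run_campaign())  # the campaign homes in on the most effective module
```

The point is the feedback loop: each attempt's outcome feeds the next selection, which is what compresses hours of manual trial-and-error into seconds.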

Cloud computing and access to powerful CPUs/GPUs are increasing the risk of these adversaries becoming experts at wielding these AI/ML tool sets, which were designed for the good guys to use.

When combined with AI, ML provides automation platforms for exploit kits and, essentially, we're fast approaching the industrialization of automated intelligence to break down cyber defenses that were constructed with AI and ML.

Many of these successful exploit kits enable a new level of automation that makes attackers more intelligent, efficient, and dangerous. DevOps and many IT groups are using AI and ML for gaining insights into their operations, and attackers are following suit.

Injecting Corrupted Data
As researchers point out, attackers will learn how the enterprise defends itself with ML, then feed corrupt data into the computational algorithms and statistical models the enterprise uses, throwing off its defensive machine learning models. Ingested data is the raw material these models learn from, which makes it the natural point of attack.

Many ML models in cybersecurity solutions, especially deep learning models, are considered to be black boxes in the industry. They can use more than 100,000 feature inputs to make their determinations and detect the patterns needed to solve a problem, such as the detection of anomalous cyber exploit behaviors in an organization or network.
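At its core, this kind of detection is about scoring how far observed behavior deviates from a learned baseline. The sketch below shrinks the idea to a single hypothetical feature (logins per hour) scored by z-score; real products use thousands of features and far richer algorithms.

```python
import statistics

# Hypothetical baseline of normal logins per hour, learned from history.
baseline = [4, 5, 3, 6, 5, 4, 5, 6, 4, 5]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def anomaly_score(x):
    """Standard deviations away from the baseline mean."""
    return abs(x - mean) / stdev

print(anomaly_score(5))    # typical activity scores low
print(anomaly_score(40))   # a burst of logins scores very high
```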

From the security team's point of view, this can mean trusting a model or algorithm inside a black box that they don't understand, which prompts the question: Can "math" really catch the bad actors?

Data Poisoning
One improvement on the horizon is the ability to enable teams in the security operations center to understand how ML models reach their conclusions rather than having to flat-out trust that the algorithms are doing their jobs. So, when the model says there is anomalous risky behavior, the software can explain the reasoning behind the math and how it came to that conclusion.
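One hedged sketch of what such an explanation might look like: score each feature's deviation separately, then report which features drove the overall alert. The feature names and numbers are hypothetical, and real explainability techniques (such as attribution methods for deep models) are far more involved.

```python
import statistics

# Hypothetical per-feature baselines learned from historical data.
baseline = {
    "logins_per_hour": [4, 5, 3, 6, 5, 4, 5, 6, 4, 5],
    "bytes_out_mb":    [10, 12, 9, 11, 10, 13, 10, 11, 12, 10],
}
stats = {f: (statistics.mean(v), statistics.stdev(v))
         for f, v in baseline.items()}

def explain(observation):
    """Return (feature, z-score) pairs, biggest contributor first."""
    contribs = {f: abs(observation[f] - m) / s
                for f, (m, s) in stats.items()}
    return sorted(contribs.items(), key=lambda kv: kv[1], reverse=True)

# A burst of outbound data with otherwise normal login activity:
for feature, z in explain({"logins_per_hour": 5, "bytes_out_mb": 300}):
    print(f"{feature}: {z:.1f} sigma from baseline")
```

An analyst reading this output learns not just that the event is risky, but that the outbound data volume, not the login pattern, is what tripped the alert.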

This is extremely important because it's difficult to detect whether adversaries have injected bad data — "poisoned" it — into defensive enterprise security tools to retrain the models away from their attack vectors. By poisoning the ML model's training data, adversaries can shift the baseline behavioral paradigm so that their own behaviors artificially receive a low risk score within the enterprise and their intrusion is allowed to continue.
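The mechanics can be seen even in a toy model. In this sketch (hypothetical numbers, a deliberately simple z-score detector), the attacker slips gradually higher values into the data the model retrains on, dragging the baseline toward the attack behavior until it scores as unremarkable.

```python
import statistics

def anomaly_score(history, x):
    """Standard deviations of x from the mean of the training history."""
    m, s = statistics.mean(history), statistics.stdev(history)
    return abs(x - m) / s

clean = [4, 5, 3, 6, 5, 4, 5, 6, 4, 5]   # normal logins/hour (hypothetical)
attack_rate = 30                          # the attacker's real activity level

print(anomaly_score(clean, attack_rate))  # clearly anomalous on clean data

# Poisoning: inject a ramp of escalating values into the retraining set,
# widening the baseline until the attack rate sits inside it.
poisoned = clean + [12, 18, 24, 28, 30, 29, 31, 30]
print(anomaly_score(poisoned, attack_rate))  # now scores as near-normal
```

The defense implication is that retraining pipelines need provenance checks and drift monitoring on their inputs, not just accuracy metrics on their outputs.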

What the Future Holds
For other intents — influencing voters, for example — bad actors run ML against Twitter feeds to spot the patterns politicians use to sway specific groups of voters. Once their ML algorithms find these campaigns and identify their patterns, they can create counter-campaigns to manipulate opinion or poison a positive campaign being pushed by a political group.

Then, there is the threat of botnets. Mirai was the first to cause widespread havoc, and now there are variants that use new attack vectors to create the zombie hordes of Internet of Things devices. There are even more complex industrial IoT attacks focused on taking down nuclear facilities or even whole smart cities. Researchers have studied how potential advanced botnets can take down water systems and power grids.

AI and ML tools are now off-the-shelf commodities that midlevel engineers can master without being data scientists. The one thing that keeps this from being a perfect technology, for good actors or bad, is the difficulty of operationalizing machine learning to greatly reduce false positives and false negatives.

That is what new "cognitive" technologies aspire to become — more than the sum of their AI and ML parts — by not just detecting patterns of bad behavior in big data with high accuracy, but also justifying recommendations about how to deal with them by providing context for the decision-making.


Satish Abburi is the Founder of Elysium Analytics, the cognitive SIEM (security information and event management) company, incubated at System Soft Technologies, where he also leads the Big Data Solutions practice. Prior to this, Satish was Vice President of Engineering at ...
