
AI-Powered 'BlackMamba' Keylogging Attack Evades Modern EDR Security

Researchers warn that polymorphic malware created with ChatGPT and other LLMs will force a reinvention of security automation.

[Image: a black mamba snake, coiled. Source: Christopher Wedd via Alamy Stock Photo]

A proof-of-concept, artificial intelligence (AI)-driven cyberattack that changes its code on the fly can slip past the latest automated security-detection technology, demonstrating the potential for creating undetectable malware.

Researchers from HYAS Labs demonstrated the proof-of-concept attack, dubbed BlackMamba, which exploits a large language model (LLM) — the technology on which ChatGPT is based — to synthesize polymorphic keylogger functionality on the fly. The attack is "truly polymorphic" in that every time BlackMamba executes, it resynthesizes its keylogging capability, the researchers wrote.

The BlackMamba attack, outlined in a blog post, demonstrates how AI can enable malware to modify benign code dynamically at runtime, without any command-and-control (C2) infrastructure, allowing it to slip past current automated security systems that are tuned to detect exactly that kind of attacker behavior.

"Traditional security solutions like endpoint detection and response (EDR) leverage multi-layer, data intelligence systems to combat some of today’s most sophisticated threats, and most automated controls claim to prevent novel or irregular behavior patterns," the HYAS Labs researchers wrote. "But in practice, this is very rarely the case."

They tested the attack against an EDR system that they did not name but characterized as "industry leading"; it frequently produced zero alerts or detections.

Using its built-in keylogging ability, BlackMamba can collect sensitive information from a device, including usernames, passwords, and credit card numbers, the researchers said. Once this data is captured, the malware uses a common and trusted collaboration platform — Microsoft Teams — to send the collected data to a malicious Teams channel. From there, attackers can exploit the data in various nefarious ways, selling it on the Dark Web or using it for further attacks, the HYAS Labs researchers said.

"MS Teams is a legitimate communication and collaboration tool that is widely used by organizations, so malware authors can leverage it to bypass traditional security defenses, such as firewalls and intrusion detection systems," they wrote. "Also, since the data is sent over encrypted channels, it can be difficult to detect that the channel is being used for exfiltration."

Moreover, BlackMamba's delivery system is built on an open source Python package that converts Python scripts into standalone executable files, which can run on various platforms, including Windows, macOS, and Linux, they wrote.

What This Means for Modern Security

AI-powered attacks like this will become more common as threat actors create polymorphic malware that leverages ChatGPT and other sophisticated, data-intelligence systems based on LLMs, according to the HYAS Labs researchers. This, in turn, will force automated security technology to evolve to manage and combat these threats.

"The threats posed by this new breed of malware are very real," the researchers wrote in the post. "By eliminating C2 communication and generating new, unique code at runtime, malware like BlackMamba is virtually undetectable by today's predictive security solutions."

Typically, organizations that deploy EDR and other automated security controls as part of a modern security stack believe they're doing everything in their power to detect and prevent malicious activity. However, BlackMamba's use of AI demonstrates that "they are not foolproof," the HYAS Labs researchers noted.

"The BlackMamba proof-of-concept shows that LLMs can be exploited to synthesize polymorphic keylogger functionality on-the-fly, making it difficult for EDR to intervene," they wrote.

The security landscape will have to evolve alongside attackers' use of AI to keep up with the more sophisticated attacks that are on the horizon, according to the researchers. Until then, it's imperative that organizations "remain vigilant, keep their security measures up to date," they advised, "and adapt to new threats that emerge by operationalizing cutting-edge research being conducted in this space."

About the Author(s)

Elizabeth Montalbano, Contributing Writer

Elizabeth Montalbano is a freelance writer, journalist, and therapeutic writing mentor with more than 25 years of professional experience. Her areas of expertise include technology, business, and culture. Elizabeth previously lived and worked as a full-time journalist in Phoenix, San Francisco, and New York City; she currently resides in a village on the southwest coast of Portugal. In her free time, she enjoys surfing, hiking with her dogs, traveling, playing music, yoga, and cooking.

