GenAI Writes Malicious Code to Spread AsyncRAT

Researchers have uncovered one of the first examples of threat actors using artificial intelligence chatbots for malware creation, in a phishing attack spreading the open source remote access Trojan.

Threat actors have used generative artificial intelligence (GenAI) to write malicious code in the wild to spread an open source remote access Trojan (RAT). It's one of the first observed examples of attackers weaponizing the chatbot technology for this purpose.

Researchers from HP Wolf Security have found evidence of the campaign, in which the attackers used GenAI to help them write VBScript and JavaScript code that was then used to distribute AsyncRAT, an easily accessible, commodity malware that can be used to control a victim's computer.

The researchers first noticed the behavior when investigating a suspicious email in June. It had "an unusual French email attachment" posing as an invoice, HP Wolf Security revealed in its "Threat Insights Report" (PDF) for this month. The researchers ultimately discovered a campaign that was using both scripting types — code that was not, as it usually is, obfuscated — to spread AsyncRAT.

"The scripts' structure, comments, and choice of function names and variables were strong clues that the threat actor used GenAI to create the malware," according to the report.

It's widely believed that attackers have already used GenAI to help them write more convincing phishing emails, but so far there has been little evidence of the technology being used to write malicious code, largely because legitimate chatbot tools have guardrails that prevent malicious use. However, security experts have known since the advent of the technology that it was only a matter of time before threat actors found a way around those guardrails, and the development of malicious chatbots is already a phenomenon on the Dark Web.

The campaign demonstrates that attackers are quickly leveling up in their use of GenAI in a way that should put defenders on alert, the researchers noted. "The activity shows how GenAI is accelerating attacks and lowering the bar for cybercriminals to infect endpoints," according to the report.

Investigating a Malicious Email Campaign

Once the researchers discovered the disguised invoice, they dug deeper and found that the attachment was simply an HTML file which, when opened in a browser, asked for a password. At first they believed the threat to be an HTML-smuggling attack; however, it didn't behave like typical HTML smuggling, in which the payload stored inside the HTML file is an encrypted archive.

Instead, the payload was encrypted within the JavaScript code itself using the Advanced Encryption Standard (AES), and the encryption was implemented without any mistakes. This meant that to decrypt the file, the researchers needed the correct password.

Eventually, the research team brute-forced the correct password to the file and found that the decrypted archive contained a VBScript file that, when run, starts an infection chain that ultimately deploys the AsyncRAT. "The VBScript writes various variables to the Windows Registry, which are reused later in the chain," according to the report.
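The report does not detail how that brute-force was carried out, but the sketch below illustrates, in rough terms, how an analyst might try candidate passwords against an AES-encrypted blob recovered from such an attachment. It assumes AES-GCM with a PBKDF2-SHA256-derived key; the function name, key-derivation parameters, and data layout are hypothetical and are not taken from HP's analysis.

```python
# Minimal analyst-side sketch (hypothetical parameters, not from the HP report):
# try candidate passwords against an AES-GCM-encrypted payload until one decrypts.
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

def brute_force_payload(salt: bytes, nonce: bytes, ciphertext: bytes, wordlist):
    """Return (password, plaintext) for the first candidate that decrypts cleanly."""
    for candidate in wordlist:
        # Derive a 256-bit key from the candidate password (assumed PBKDF2-SHA256).
        key = hashlib.pbkdf2_hmac("sha256", candidate.encode(), salt, 100_000, dklen=32)
        try:
            # AES-GCM authenticates the ciphertext, so a wrong key raises InvalidTag.
            return candidate, AESGCM(key).decrypt(nonce, ciphertext, None)
        except InvalidTag:
            continue
    return None, None
```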

As part of that infection chain, a JavaScript file is dropped into the user directory; it reads a PowerShell script from the registry and injects it into a newly started PowerShell process. The PowerShell script then makes use of the other registry variables and runs two more executables, which inject the malware payload into a legitimate process and start it.

Unpacking GenAI-Generated Scripts

It was through a deeper analysis of both the VBScript and the JavaScript used in the infection chain that the researchers noticed the code was not obfuscated, which seemed odd because attackers typically obfuscate their code to cover their tracks.

"In fact, the attacker had left comments throughout the code, describing what each line does — even for simple functions," according to the report. "Genuine code comments in malware are rare because attackers want to their make malware as difficult to understand as possible."

This behavior, along with the scripts' structure, the consistent comments for each function, and the choice of function names and variables, made it reasonably clear that the attacker had used GenAI to develop the scripts, according to HP Wolf Security.

Now that threat actors are starting to harness GenAI in their attack strategies, defenders should also integrate the technology into their security posture to fight fire with fire. Organizations can use GenAI to recognize threat patterns and identify unauthorized access or malicious intent before attackers have a chance to infiltrate an environment. Indeed, the same efficiencies that GenAI creates in an attack flow for malicious actors can also be leveraged by defenders to make their jobs easier, the security researchers said.

About the Author

Elizabeth Montalbano, Contributing Writer

Elizabeth Montalbano is a freelance writer, journalist, and therapeutic writing mentor with more than 25 years of professional experience. Her areas of expertise include technology, business, and culture. Elizabeth previously lived and worked as a full-time journalist in Phoenix, San Francisco, and New York City; she currently resides in a village on the southwest coast of Portugal. In her free time, she enjoys surfing, hiking with her dogs, traveling, playing music, yoga, and cooking.
