Malware writers go low-tech in their latest attempt to escape detection, waiting for human input -- a mouse click -- before running their code

Dark Reading Staff, Dark Reading

December 20, 2012

4 Min Read

The world suddenly changed for antivirus companies in 2004.

During the previous two years, malware writers had kept up a steady stream -- a trickle, in retrospect -- of viruses, Trojan horses, and worms. Security firm Symantec, for example, added approximately 20,000 new signatures each year to its Norton antivirus software to keep up with the malware flow. But in 2004, the number of new malware variants skyrocketed, forcing Symantec to add 75,000 signatures that year, 169,000 signatures in 2006, and 1.7 million signatures in 2008. This year, its software carries 21.5 million signatures, according to the firm's latest data.

Without automating the analysis of pernicious programs, antivirus firms would be overwhelmed.

But increasingly, malware writers are finding ways to attack antivirus firms' ability to cope with the flood of malicious binaries, and that could stress the entire system for triaging and analyzing malware.

"If malware can hide itself from automated threat analysis systems, it can blend in with millions of sample files and antivirus applications may not be able to figure out that it is malicious," Symantec researcher Hiroshi Shinotsuka said in an October blog post. "Therefore, both malware and packer program authors attempt to utilize techniques to hide malicious files from automated threat analysis systems."

The average antivirus firm sifts through hundreds of thousands of binaries every day to identify tens of thousands of new variants that may be targeting customers. And those estimates may be low: Symantec's systems had to wade through an ocean of 403 million unique variants of malware in 2011, a 41 percent increase over the previous year, the company said in its Internet Security Threat Report released in April.

Meanwhile, attackers have slowly escalated their methods. The first attempts merely aimed to overwhelm antivirus firms' processes, using packers and obfuscation to turn a single piece of malware into multiple, dissimilar binaries. Then attackers built their malware to look for signs that it was running inside an analysis environment, most commonly a virtual machine.

"We have seen threats that will check 50 different attributes of the environment before they start running," said Liam O Murchu, a researcher with Symantec's security response team.

More technical attacks have emerged, however. The Flashback Trojan, which infected both Windows and Mac OS X systems, locked itself to a system using an encryption algorithm based on a key derived from the specific system's properties. Gauss took the technique even further.

[Malicious programs continually evolved in 2012, whether using new technical approaches to infection, novel business models, or demonstrating the vulnerability of areas thought unrelated to cybersecurity. See Slide Show: Top 10 Malware Advances In 2012.]

A new Trojan has taken a much simpler approach. The program, known as UpClicker, looks for evidence that it's running on a human-controlled machine. Because automated malware analysis platforms typically run for only a few minutes with no human interaction, UpClicker waits for the left mouse button to be released. Only then does the malware run its main process.
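That kind of input gating takes little more than standard Windows APIs. The sketch below is a plausible minimal illustration of the general pattern -- a low-level mouse hook that holds back a harmless placeholder action until the left button comes up -- and is not UpClicker's actual code:

    #include <windows.h>
    #include <stdio.h>

    /* Illustrative input-gated program: nothing happens until the user
     * presses and then releases the left mouse button. A low-level mouse
     * hook (WH_MOUSE_LL) sees global mouse events; on WM_LBUTTONUP the
     * program continues. */

    static HHOOK g_hook;

    static LRESULT CALLBACK MouseProc(int code, WPARAM wParam, LPARAM lParam)
    {
        if (code == HC_ACTION && wParam == WM_LBUTTONUP) {
            /* An interaction-gated sample would proceed with its main
             * process here; this sketch just prints a message and exits. */
            puts("Left button released -- continuing.");
            PostQuitMessage(0);
        }
        return CallNextHookEx(g_hook, code, wParam, lParam);
    }

    int main(void)
    {
        MSG msg;

        g_hook = SetWindowsHookExW(WH_MOUSE_LL, MouseProc,
                                   GetModuleHandleW(NULL), 0);
        if (!g_hook) {
            fprintf(stderr, "SetWindowsHookEx failed: %lu\n", GetLastError());
            return 1;
        }

        /* Low-level hooks require a message loop on the installing thread. */
        while (GetMessageW(&msg, NULL, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessageW(&msg);
        }

        UnhookWindowsHookEx(g_hook);
        return 0;
    }

In an unattended sandbox that click never arrives, so the interesting behavior never executes.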

In an analysis of the UpClicker Trojan, threat-protection firm FireEye found that the simple technique works well.

"They went for a very specific thing, where, unless the mouse button is released, it won't do anything," says Abhishek Singh, a senior malware research engineer with the company. "So it provides a bit of a challenge for the sandboxed systems."

When it infects a system, UpClicker takes an innocuous first step, binding itself to the mouse. The malware then hibernates until a user clicks the left mouse button and releases it. Unless security researchers emulate the button press, automated analysis systems will stop observing the malware before it actually does anything malicious, and the code will not be flagged for further investigation.
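The countermeasure is conceptually just as simple: have the analysis environment inject synthetic user input. The sketch below shows one way a harness might fire a left-button press and release with the standard Win32 SendInput call; the timing and cadence are illustrative assumptions, not any vendor's implementation:

    #include <windows.h>

    /* Illustrative: an analysis harness could periodically inject a
     * synthetic left-button click so that samples gated on button-up
     * events proceed while still under observation. */
    static void send_left_click(void)
    {
        INPUT in[2];
        ZeroMemory(in, sizeof(in));

        in[0].type = INPUT_MOUSE;
        in[0].mi.dwFlags = MOUSEEVENTF_LEFTDOWN;  /* press */

        in[1].type = INPUT_MOUSE;
        in[1].mi.dwFlags = MOUSEEVENTF_LEFTUP;    /* release */

        SendInput(2, in, sizeof(INPUT));
    }

    int main(void)
    {
        /* Example cadence: one click every 10 seconds for a minute; a
         * real harness might also mix in mouse movement and scrolling. */
        for (int i = 0; i < 6; i++) {
            Sleep(10000);
            send_left_click();
        }
        return 0;
    }

The sketch assumes the sandbox runs long enough for the sample to register its hook before the first synthetic click arrives.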

"When you are dealing with malware that requires a certain level of interaction, or a certain level of activity, or looking for things that exist on the host, or things that should not exist on the host, that's when these systems start having difficulty, and the playing field changes substantially," says Dean De Beer, chief technology officer with ThreatGRID, a provider of malware analysis services.

To counter techniques designed to keep malware from giving itself away to analysts, ThreatGRID's engineers review any sample that did not run in their environment to ascertain why, he says. Many times the problem is a bug in the system, but quite often the failure is due to malware designed to stymie analysis.

The arms race between malware authors and security firms will continue, Symantec's O Murchu says. Most malware developers are unlikely to code their programs to evade analysis, but the researcher does expect adoption of the techniques to grow, albeit slowly.

"If there were a huge change in the techniques, we [the security firm] are all watching out for that, and we would all change our systems to deal with it," he says.
