In a paper presented at the USENIX Security Conference this week, the three researchers -- Jiyong Jang, Maverick Woo, and David Brumley of Carnegie Mellon University -- used software features to graph the relationships between program code and track the evolution of benign and malicious software programs. The researchers created both a program, known as iLINE, to construct trees and graphs of related program versions, and a system, known as iEVAL, to evaluate the correctness of their family trees.
The researchers found that even basic measures -- such as file size and similarities among code snippets -- can be used to organize software by its lineage, from a root program to subsequent versions. Using file and section size for benign programs yielded family trees that were about 95 percent accurate, while using similarities among small sections of code pushed accuracy as high as 96 percent.
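The paper does not publish iLINE's code, but the idea of inferring lineage from simple similarity features can be sketched roughly as follows: compare versions by the overlap of their byte n-grams, then attach each version to its most similar predecessor. This is a hypothetical illustration, not the researchers' actual implementation; the function names and the toy byte strings are invented for the example.

```python
# Hypothetical sketch (NOT the iLINE implementation): build a lineage tree
# by linking each program version to its most similar earlier version,
# using Jaccard similarity over byte n-grams as the "code snippet" feature.

def ngrams(data: bytes, n: int = 4) -> set:
    """Return the set of n-byte substrings of a binary."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def similarity(a: bytes, b: bytes, n: int = 4) -> float:
    """Jaccard similarity between the n-gram sets of two binaries."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga and not gb:
        return 1.0
    return len(ga & gb) / len(ga | gb)

def infer_lineage(versions: list) -> dict:
    """Given versions in release order (index 0 treated as the root),
    attach each later version to its most similar predecessor.
    Returns a {child_index: parent_index} mapping."""
    parents = {}
    for i in range(1, len(versions)):
        parents[i] = max(range(i),
                         key=lambda j: similarity(versions[j], versions[i]))
    return parents

# Toy example: v1 extends v0 with new code; v2 is a small edit of v0,
# so the inferred tree should hang both v1 and v2 off the root v0.
v0 = b"push ebp; mov ebp, esp; call decrypt_payload"
v1 = b"push ebp; mov ebp, esp; call decrypt_payload; call spread"
v2 = b"push ebp; mov ebp, esp; call decrypt_payload_v2"
print(infer_lineage([v0, v1, v2]))
```

A real system would also weigh section sizes, handle versions arriving out of order, and pick the root rather than assume it, which is where the hard accuracy problems described below come in.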
"In the beginning, we thought constructing a lineage would be impossible," says Woo, a system scientist at Carnegie Mellon's CyLab. "But after working on it, we found that even simple techniques worked quite well."
Classifying malware into families is a necessary step for security-software firms to reduce the workload on their analysts. By creating a single signature or pattern that matches every malware variant, antivirus firms can create more efficient software and reduce their workloads.
Yet after classifying malware into a group, the next step is to determine which program should represent the family of malware, says Jiyong Jang, a co-author of the paper who just completed his PhD in electrical and computer engineering at CMU. The parent of all the programs, known as the root, is a good choice, he says.
"If you have constructed a lineage, you can determine the root and study that program," Jang says.
A major attraction of tracing the history of a malware family tree is the hope that it might lead to the root -- the developer who created the malware, says Jason Lewis, chief scientist with Lookingglass Cyber Solutions, a threat-intelligence firm.
"I think the usefulness here is trying to determine who is writing the code," he says, adding that attributing code to a specific group of actors -- even if their identities are unknown -- can still be useful. "Is this someone who is interested just in stealing money and bank account information, or is this someone who is more of a state-level actor?"
[Malware writers go low-tech in their latest attempt to escape detection, waiting for human input -- a mouse click -- before running their code. See Automated Malware Analysis Under Attack.]
Attribution is typically a labor-intensive manual process. Using the automated techniques shown off by the CMU researchers could allow malware analysts to look for complex code structures or algorithms that indicate a developer with more technical chops than the average online bank thief, Lewis says.
Yet any such analysis has to be careful not to draw too many conclusions from tenuous relationships, says Joe Stewart, director of malware research for Dell Secureworks, a managed security service provider. Two similar sets of code may not indicate a single author: it could mean that two developers collaborated, that one author copied code from the other, or that both projects used a common third library.
The researchers noted other pitfalls as well. Incorrectly guessing the root program, for example, can nearly halve the accuracy of the resulting family tree. And while the size of each section of code was a good way to match up malicious code, file size was not.
In the end, the techniques could prove useful if used to highlight interesting similarities between code so that analysts can follow up on them, Stewart says.
"You have to be careful what you infer out of [these analyses] ... but it could lead you to connections that you might not see otherwise," Stewart says. "Having something that automates the matching of software features is a good piece of the [analysis] puzzle."
Woo and Jang aim to improve their system and identify more interesting features that can be automatically extracted from program binaries, at runtime, and from source code. While the current methods built on basic features do quite well, the aim is to create a more robust way of inferring relationships between code and to exceed 95 percent accuracy.
"We are all competing on the last 5 percent, so we need to construct more meaningful features," Woo says.