Tool Automates Social Engineering In Man-In-The-Middle Attack
Researchers demonstrate attack that dupes victims in online chats
French researchers have developed an automated social engineering tool that mounts a man-in-the-middle attack by striking up online conversations with potential victims.
The proof-of-concept HoneyBot poses convincingly as a real human in Internet Relay Chat (IRC) and instant messaging sessions. It lets an attacker glean personal and other valuable information from victims via these chats, or lure them into clicking on malicious links. And the researchers had plenty of success in their tests: They were able to get users to click on malicious links sent via their chat messages 76 percent of the time.
The researchers who created the PoC -- Tobias Lauinger, Veikko Pankakoski, Davide Balzarotti, and Engin Kirda, all of Institut EURECOM in France -- are also working on taking their creation a step further to automate social engineering attacks on social networks.
"By automatically crawling and correlating the information users store in social networks, we are able to collect detailed personal information about each user, which we use for automated profiling," Kirda says. "Having access to such information would allow an attacker to launch sophisticated, targeted attacks or to improve the efficiency of spam campaigns."
The researchers originally wrote their so-called HoneyBot PoC tool as a way to demonstrate large-scale automated social engineering attacks. While spammers typically send IM messages that attempt to lure users to click on their malicious links, these attacks are often fairly conspicuous and obvious to the would-be victim. "We wanted to see if it would be possible to automate social engineering, and how effective such attacks would be in practice. Our aim was to warn against a new threat posed by sophisticated [automated social engineering] bots and raise awareness about such attacks in practice," Kirda says.
Such an attack could occur via an online shopping website or bank site that contains an embedded chat window, the researchers say. An attacker then could set up a phishing site and wage a man-in-the-middle attack on the chat window. "The attacker [then] can read all the data that is entered by the victims and modify it before it is sent to the authentic support," Lauinger says.
It could also be used to distribute malware by setting up a malicious Web page that infects the user's machine, for example.
The researchers demonstrated an attack that works like this: The bot registers as a regular user of a chat service and initiates an online conversation with a real user, "Alice." If Alice sends a message back to the bot, then the bot forwards her message to another legitimate user, "Bob," while eavesdropping and directing their conversation.
"Instead of using artificial intelligence or some other form of logic to generate an answer, the bot just forwards Alice's message to a second human user, Bob," Lauinger says.
Alice and Bob think they're talking to a real IRC user, but it's really the bot. "The messages sent to that nickname are ultimately answered by another human user. That other user isn't aware of the bot, either, because the attack works exactly in the same way for both human users that are involved in the attack."
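The relay mechanism described above can be sketched in a few lines of Python. This is purely illustrative: the class and method names are hypothetical, the researchers' actual HoneyBot code is not reproduced here, and real deployment would sit on top of an IRC or IM connection. The sketch shows only the core idea, which is that the bot forwards each user's message to the other user while logging the exchange and optionally rewriting any links.

```python
import re

class RelayBot:
    """Illustrative man-in-the-middle chat relay: forwards messages between
    two users ("alice" and "bob") who each believe they are talking to the
    bot's nickname, while eavesdropping and optionally swapping links."""

    def __init__(self, rewrite_links_to=None):
        self.rewrite_links_to = rewrite_links_to  # attacker-controlled URL, if any
        self.log = []  # eavesdropped conversation transcript

    def _rewrite(self, message):
        # Replace every URL in the message with the attacker's link.
        if self.rewrite_links_to:
            message = re.sub(r"https?://\S+", self.rewrite_links_to, message)
        return message

    def on_message(self, sender, message):
        """Called when either user messages the bot; returns the other user
        as recipient plus the (possibly modified) forwarded message."""
        self.log.append((sender, message))
        recipient = "bob" if sender == "alice" else "alice"
        return recipient, self._rewrite(message)

# Alice's message is passed straight through to Bob, link swapped en route.
bot = RelayBot(rewrite_links_to="http://evil.example/click")
recipient, forwarded = bot.on_message("alice", "check out http://example.com/news")
print(recipient, forwarded)  # bob check out http://evil.example/click
```

Because neither side ever sees the other's nickname, the conversation stays plausible without the bot generating a single sentence of its own, which is exactly why it needs no artificial intelligence.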
The Python-based HoneyBot tool can automatically connect and disconnect from IRC channels and execute multiple attacks. It also speaks English, French, and Italian. The tool was first revealed publicly in April at the Usenix LEET symposium, where Lauinger presented the team's paper (PDF) -- and the researchers plan to detail their social networking enhancements in September at the Recent Advances in Intrusion Detection (RAID) 2010 Symposium in Ottawa.
The researchers also conducted a limited experiment with the tool on Facebook, mainly to prove it was possible. Lauinger says Facebook would be a more lucrative attack surface for a bad guy because of the large number of novice users and the wealth of private and sensitive data there. An attacker could build a phony profile and go from there: "If an attacker manages to clone two profiles and get on the friend list of the respective authentic user, it could forward messages between the fake and authentic profiles," he says. "If the real users chat with the fake profile instead of the real one, the attacker could spy on the messages that are exchanged and modify them, as in our social engineering attack."
Meanwhile, the researchers say they were surprised by how long the bot was able to successfully engage users. "We had the feeling that a man-in-the-middle bot attack would work well in practice. However, we did not think that we would be able to sustain the conversation between some users for several hours," Balzarotti says. "Also, we were surprised that many users clicked on links, although some IRC channels explicitly warned them against clicking on links."
Defending against an automated social engineering attack isn't easy: Social engineering, by nature, exploits human nature, and there's no patch for that. Heuristic detection can at least flag suspicious user behavior, but a slick attacker can find a way to evade it, the researchers say.
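As a rough illustration of the kind of heuristic the researchers allude to, a chat service could flag nicknames whose messages contain links unusually often. The class name and thresholds below are entirely made up for illustration; a real detector would combine many more signals, and, as the researchers note, an attacker who relays mostly human-written text would evade a simple check like this.

```python
import re
from collections import defaultdict

LINK_RE = re.compile(r"https?://\S+")

class LinkRatioHeuristic:
    """Toy heuristic: flag a nick once enough of its messages carry links.
    Thresholds are illustrative, not tuned values from the paper."""

    def __init__(self, min_messages=5, max_link_ratio=0.4):
        self.min_messages = min_messages      # don't judge on too few messages
        self.max_link_ratio = max_link_ratio  # tolerated fraction of link messages
        self.totals = defaultdict(int)
        self.with_links = defaultdict(int)

    def observe(self, nick, message):
        self.totals[nick] += 1
        if LINK_RE.search(message):
            self.with_links[nick] += 1

    def is_suspicious(self, nick):
        total = self.totals[nick]
        if total < self.min_messages:
            return False
        return self.with_links[nick] / total > self.max_link_ratio

h = LinkRatioHeuristic()
for msg in ["hi", "look http://x.test/a", "see http://x.test/b",
            "also http://x.test/c", "and http://x.test/d"]:
    h.observe("spambot", msg)
print(h.is_suspicious("spambot"))  # True: 4 of 5 messages contained links
```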