Phishing Can Leverage Users To Bypass Sandboxes
Using social engineering to bypass traditional security defenses is not new and will certainly continue to grow.
Ransomware attacks have increased drastically in the past year, thanks in part to ongoing spam campaigns that have grown much more creative. We are not talking about the now-standard practice of attaching non-executable files, but about combining social engineering with code designed to defeat sandboxes.
Of course, social engineering is at the heart of every phishing attack, enabling attackers to run their malware without having to resort to fancy exploits. It has also brought macros back from the grave (or at least made them relevant again), because removing the actual payload from the email attachment makes security scanners less likely to flag the attachment as malicious. Indeed, decoy Office documents such as fake invoices or contracts provide the perfect platform for adding a degree of separation between the email client and the final payload. Unsuspecting users infect themselves by allowing a malicious macro to download and run malware.
Malware sandboxes can handle such rogue attachments quite well in an automated fashion, for instance by using a test environment where Microsoft Office's macro security settings have been lowered or disabled altogether. Malware authors, in turn, have come up with many countermeasures to ensure their code only runs on real victims' machines, such as checking for recently opened documents or for processes that belong to virtual machines.
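To make those countermeasures concrete, here is a minimal Python sketch of the kind of fingerprinting checks a sandbox-aware document might run before detonating, written from the defender's perspective as an audit of how easily an analysis VM can be spotted. The specific process names and the recent-documents threshold are illustrative assumptions, not checks tied to any particular malware family.

import os
import subprocess

# Process names commonly associated with virtualized analysis environments
# (illustrative list, not exhaustive).
VM_PROCESS_NAMES = {"vboxservice.exe", "vboxtray.exe", "vmtoolsd.exe", "vmwaretray.exe"}

def running_vm_processes():
    """Return any VM-related process names visible to the Windows tasklist command."""
    output = subprocess.run(["tasklist"], capture_output=True, text=True).stdout.lower()
    return [name for name in VM_PROCESS_NAMES if name in output]

def recent_document_count():
    """Count entries in the user's Recent Items folder; a near-empty folder
    suggests a freshly provisioned machine rather than a real user's desktop."""
    recent = os.path.expandvars(r"%APPDATA%\Microsoft\Windows\Recent")
    try:
        return len(os.listdir(recent))
    except OSError:
        return 0

def looks_like_analysis_vm(min_recent_docs=5):
    """Heuristic: flag the environment if VM processes are present or the
    machine shows almost no signs of everyday use."""
    return bool(running_vm_processes()) or recent_document_count() < min_recent_docs

if __name__ == "__main__":
    print("Environment is easily fingerprinted:", looks_like_analysis_vm())

Checks like these are exactly what a "smart" sandbox has to defeat, for example by populating the Recent Items folder and hiding hypervisor artifacts before detonating a sample.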
Building stealthy and “smart” sandboxes for malware analysis is feasible, and threat actors know that. What is harder to emulate are specific user interactions that can be requested as instructions within the phishing email or document. Take, for example, a password-protected Word document, or a spreadsheet that requires the victim to double-click a particular part of the sheet before the payload triggers. With advanced and ever-changing scenarios like these, automated analysis reaches its limits.
Every second counts. The identification of new malware families and command-and-control servers can be delayed until an analyst either manually runs the sample or updates the sandbox to account for the new evasion trick. And there seems to be no end to the tricks the bad guys can pull: today they ask the victim to type in a specific password to infect themselves via a Word macro; tomorrow they might ask them to solve a simple equation to do the same thing. Does that mean we need artificial intelligence built into our sandboxes?
To some degree we do, but there are also ways to proactively detect these threats no matter what user interaction is required. Unfortunately, end users can easily be coerced into doing things that put them in danger; it is simply a matter of how credible the phishing lure looks and what psychological factors come into play. However, if we stick to what we do know -- such as watching for the behaviors that applications are trying to perform -- then we can preemptively stop attacks.
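As a rough sketch of what behavior-based detection can look like, the Python example below (using the third-party psutil library) flags Office applications that spawn a shell or script host, a pattern typical of macro-based downloaders regardless of which lure convinced the user to enable the macro. The process names and the polling loop are simplifying assumptions for illustration; a real endpoint product would hook process creation events rather than poll.

import time
import psutil  # third-party dependency: pip install psutil

# An Office application launching a shell or script interpreter is suspicious
# no matter how the user was lured into enabling the macro (illustrative lists).
OFFICE_APPS = {"winword.exe", "excel.exe", "powerpnt.exe"}
SUSPICIOUS_CHILDREN = {"powershell.exe", "cmd.exe", "wscript.exe", "cscript.exe", "mshta.exe"}

def scan_once():
    """Return (parent, child, pid) tuples for suspicious Office child processes."""
    alerts = []
    for proc in psutil.process_iter(["name"]):
        try:
            name = (proc.info["name"] or "").lower()
            if name not in SUSPICIOUS_CHILDREN:
                continue
            parent = proc.parent()
            if parent and parent.name().lower() in OFFICE_APPS:
                alerts.append((parent.name(), name, proc.pid))
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue  # process exited or is protected; skip it
    return alerts

if __name__ == "__main__":
    while True:
        for parent, child, pid in scan_once():
            print(f"ALERT: {parent} spawned {child} (pid {pid})")
        time.sleep(2)  # simple polling for the sketch only

Whether the lure asked for a password or a math puzzle, the macro still has to perform this kind of action to deliver its payload, which is why watching behavior holds up where lure-specific emulation does not.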