'ROPEMAKER': Behind the Scenes of an Exploit Disclosure

How 'social responsibility' and 'false security' played into the unmasking of a recently disclosed email vulnerability.

Matthew Gardiner, Senior Product Marketing Manager, Mimecast

September 11, 2017


Threat research reminds me of a well-known saying describing the experience of an army at war: "boredom punctuated by moments of terror." That's a bit of hyperbole, for sure, when applied to IT security. Threat research is rarely boring, but most of the time it isn't incredibly exciting either; it is generally focused on the day-to-day grind of incremental discoveries that keep security defenses current.

I also don't think the everyday world of threat research often reaches the level of "terror," thankfully! But threat research can get heated and exciting at times, particularly when preparing to publish a significant discovery of a new vulnerability or exploit.

Recently the threat research team at Mimecast disclosed an email exploit named ROPEMAKER. After the initial testing work to discover and confirm ROPEMAKER's multiple exploit techniques (which took months), Mimecast went through the rigorous and lengthy process generally referred to as responsible disclosure. I will let you read the entire Wikipedia entry defining responsible disclosure, but two of the key terms in that definition hit home for me: social responsibility and false security.

There are multiple conundrums associated with publicly disclosing new exploits or application vulnerabilities, not the least of which are:

  • Who should you tell?

  • When should you tell them?

  • How do you tell them?

  • How do you know if you have told the right people?

  • How do you take competitive pressures out of the picture?

  • What do you do if you don’t get the hoped-for responses?

  • When should you go public with your discovery?

In the case of ROPEMAKER, the disclosure process was made particularly difficult because it isn't clear whether the exploit takes advantage of an email application vulnerability, abuses or misuses an otherwise properly working application (HTML-based email), or stems from a systemic design flaw in the associated Internet standards that hands malicious actors an exploitable system. It could also be a bit of all three.
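To make that ambiguity concrete, here is a simplified sketch of the kind of technique described in the ROPEMAKER research: an ordinary HTML email whose styling is fetched from a server the sender controls, so what the recipient sees can be changed after delivery without the stored message itself ever being modified. The addresses, domain, and content below are illustrative only, and not every email client fetches remote resources; this is a minimal Python sketch of the general idea, not Mimecast's proof-of-concept.

    # Illustrative sketch only: an HTML email that references a remote stylesheet.
    # The domain and addresses are made up; whether a client fetches remote CSS
    # (and is therefore exposed to this technique) varies by email application.
    from email.message import EmailMessage

    HTML_BODY = """\
    <html>
      <head>
        <!-- Stylesheet fetched from a sender-controlled server each time the
             message is rendered, so the visible content can change after delivery -->
        <link rel="stylesheet" href="https://sender-controlled.example/style.css">
      </head>
      <body>
        <!-- Which paragraph is displayed depends entirely on the remote CSS -->
        <p class="version-a">Please review the attached quarterly report.</p>
        <p class="version-b">Your account is locked. Log in at https://sender-controlled.example/login</p>
      </body>
    </html>
    """

    msg = EmailMessage()
    msg["Subject"] = "Quarterly report"
    msg["From"] = "sender@example.com"
    msg["To"] = "recipient@example.com"
    msg.set_content("Plain-text fallback for clients that do not render HTML.")  # text/plain part
    msg.add_alternative(HTML_BODY, subtype="html")                               # text/html part

    print(msg.as_string())

If the remote style.css initially hides the second paragraph and that rule is later flipped, the message the reader sees silently changes, even though every component involved is behaving as designed, which is exactly why it is debatable whether this is a vulnerability, a misuse of working features, or a standards-level design issue.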

Of course, an exploit that doesn’t clearly fall into one area leaves open the possibility that no one takes ownership of the issue or even recognizes it as an issue. I firmly believe it is part of Mimecast’s social responsibility to disclose it anyway, after going through a reasonable responsible disclosure process, because a lack of disclosure won’t make the security issue go away. Also, more eyes on the issue can lead to a better resolution.

What I'm leading up to is the ultimate conundrum of public exploit and vulnerability disclosure: what if the attackers, as they often do, move faster than the defenders and start taking advantage of newly disclosed research for their own malicious purposes? We know this is possible, even likely. One need look no further than recent attacks such as WannaCry, in which attackers took advantage of known vulnerabilities that had patches available for months but that many organizations had not applied in time, leading to a widespread spate of ransomware infections.

This is where the issue of false security comes into play. People and organizations are not safer when they lack knowledge about the insecurity of a system they are using. In fact, they suffer from a state of false security, in which their sense of security is based on ignorance rather than on the true state of their risk. Disclosure gives the good guys an opportunity to address the issue in both the short and the longer term.

The bottom line is that those of us in the IT security world live in an interesting and complex reality. Overall, security defenses are not keeping pace with the expanding attack surface, and attackers are becoming increasingly industrialized and well-resourced. This is a toxic combination. One hope is that threat research conducted by white hats can help tip the balance. But the disclosure of this potentially sensitive research must be done responsibly.

About the Author

Matthew Gardiner

Senior Product Marketing Manager, Mimecast

Matthew Gardiner is a Senior Product Marketing Manager at Mimecast and is currently focused on email security, phishing, malware, and cloud security. With more than 15 years focused on security, his expertise spans threat detection & response, network monitoring, SIEM, endpoint threat detection, threat intelligence, identity & access management, Web access management, identity federation, cloud security, and IT compliance, developed in roles at RSA, Netegrity, and CA Technologies. Previously he was President and a member of the board of trustees of the security industry non-profit the Kantara Initiative. Matthew has a BS in Electrical Engineering from the University of Pennsylvania and an SM in Management from MIT's Sloan School of Management.

