30-Second HTTPS Traffic Attack: No Fix
Researchers who discovered BREACH vulnerability promise a tool to see if your site is at risk -- but say there's no easy fix.
No fix is available for an attack that can recover plain-text information from encrypted HTTPS traffic in 30 seconds or less.
The BREACH attack -- short for Browser Reconnaissance and Exfiltration via Adaptive Compression of Hypertext -- was discovered by Salesforce.com lead product security engineer Angelo Prado, Square application security engineer Neal Harris, and Salesforce.com lead security engineer Yoel Gluck. They first presented their findings in full at last week's Black Hat information security conference in Las Vegas. According to the researchers, all versions of the Transport Layer Security (TLS) and Secure Sockets Layer (SSL) protocols are vulnerable to the attack, but not every HTTPS-using site is necessarily at risk.
How can website operators identify if their sites are at risk? In general, the researchers said, vulnerable sites, Web applications and pages serve HTTP response bodies -- the content portion of an HTTP response -- using HTTP compression. Vulnerable pages also reflect user input, such as query string or POST parameters, back in those responses. Finally, the page must serve sensitive data -- email addresses, security credentials such as session or CSRF tokens -- in the same responses to make it attractive to a would-be attacker.
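To make that checklist concrete, here is one minimal way to eyeball the first two conditions; the URL, query parameter and probe string below are hypothetical placeholders, and Python's requests library stands in for a proxy such as Fiddler.

```python
# Rough sketch of a manual BREACH precondition check: does the response use
# HTTP compression, and does it reflect attacker-controlled input?
# The URL, parameter name, and marker value below are hypothetical placeholders.
import requests

MARKER = "breachprobe123"
url = "https://shop.example.com/search"       # placeholder; point at your own endpoint

resp = requests.get(url, params={"q": MARKER},
                    headers={"Accept-Encoding": "gzip, deflate"})

encoding = resp.headers.get("Content-Encoding", "")
uses_compression = "gzip" in encoding or "deflate" in encoding
reflects_input = MARKER in resp.text          # requests decompresses the body for us

print("HTTP compression in use: ", uses_compression)
print("User input reflected back:", reflects_input)
# A page that meets both conditions and also embeds a secret (such as a CSRF
# token) in the same response matches the researchers' at-risk profile.
```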
Prado and his fellow researchers promised to release a tool to allow businesses to test their own sites using proof-of-concept BREACH exploit code. "I am in the process of cleaning up the code and hope to publish it within a week, hopefully by Sunday. It would be a standalone tool that you can run locally (currently in .NET) and target our PoC site," said Prado via email, referring to a proof-of-concept site. "Then you could just adjust the targets and hopefully point against [your] own sites."
To be clear, the tool can't be used to scan the Web at large and find vulnerable sites. "The tool is not a scanner, you'd actually have to identify a vulnerable endpoint first, this requires a human," Prado said. "In the meantime the 'Am I Affected' section of breachattack.com should be a good start for manual testing with a tool such as Fiddler," which is a free debugging tool.
What happens if a site might be vulnerable? "Unfortunately, we are unaware of a clean, effective, practical solution to the problem," said the researchers on their breachattack.com site. "Some of these mitigations are more practical and a single change can cover entire apps, while others are page specific." They added: "Whichever mitigation you choose, it is strongly recommended you also monitor your traffic to detect attempted attacks."
The most effective technique for mitigating the vulnerability is to disable HTTP compression, which is used to save bandwidth and server processing capacity and speed up page loads. Compression works by replacing repeated sequences of bytes with short references to an earlier copy, and by encoding commonly used symbols in fewer bits. But combining compression with encryption turns response size into a side channel: an attacker who can eavesdrop on HTTPS communications and also trigger requests from the victim's browser can watch how the size of the compressed, encrypted responses changes as injected guesses do or don't match secrets in the page, and ultimately deduce the information being transmitted.
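The leak itself is easy to reproduce locally. The sketch below (a deliberately simplified stand-in for real HTTPS traffic, using Python's zlib and an invented page body and token) compresses a mock response that contains both a secret and a reflected guess: when the guess matches the secret, the duplicate is replaced by a short back-reference and the output shrinks, and that size difference is what an eavesdropper measures.

```python
# Simplified, purely local illustration of the compression side channel.
# A real attack measures encrypted response sizes on the wire; here we just
# DEFLATE-compress a mock response body containing a secret plus a reflected
# "guess" and compare the output lengths.
import zlib

SECRET = "csrf_token=8f41b2"                  # pretend secret embedded in the page

def response_length(reflected_input: str) -> int:
    body = f"<p>You searched for: {reflected_input}</p> <!-- {SECRET} -->"
    return len(zlib.compress(body.encode()))

print(response_length("csrf_token=8f41b2"))   # guess matches the secret: smaller output
print(response_length("csrf_token=qwerty"))   # wrong guess: more bytes survive compression
```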
"In practice, we have been able to recover CSRF tokens with fewer than 4,000 requests," Prado said, referring to session tokens. "A browser like Google Chrome or Internet Explorer is able to issue this number of requests in under 30 seconds, including callbacks to the attacker command and control center."
Despite that threat, disabling HTTP compression typically isn't feasible, because compression delivers the Web server performance and page-response times that site administrators and users have come to expect, according to Ars Technica.
Other, less-effective mitigation techniques suggested by the researchers include "separating secrets from user input" -- which would likely involve redesigning website server software -- and masking secrets, for example by combining them with a fresh random value on every response. Other techniques include adding a random number of bytes to HTTP responses to hide their true length, and rate-limiting HTTPS requests.
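Of those mitigations, secret masking is the easiest to picture. In the rough sketch below (illustrative only, not taken from the researchers' code), the server never writes the raw CSRF token into a response; it emits a one-time random pad plus the token XORed with that pad, so the bytes change on every response even though the server can always recover the original value for verification.

```python
# Illustrative secret-masking sketch: the raw token never appears in the
# response body, so repeated responses give a compression oracle nothing
# stable to match against.
import secrets

def mask_token(token: bytes) -> bytes:
    pad = secrets.token_bytes(len(token))            # fresh random pad per response
    masked = bytes(p ^ t for p, t in zip(pad, token))
    return pad + masked                              # send pad || (pad XOR token)

def unmask_token(blob: bytes) -> bytes:
    pad, masked = blob[:len(blob) // 2], blob[len(blob) // 2:]
    return bytes(p ^ m for p, m in zip(pad, masked))

token = b"8f41b2c9"                                  # stand-in for the app's real CSRF token
wire_value = mask_token(token)
assert unmask_token(wire_value) == token             # the verifier still sees the real value
print(wire_value.hex(), "differs on every response")
```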
But many of the potential fixes carry their own baggage, and don't actually fix the underlying HTTPS problem. Or as noted by the "BREACH vulnerability in compressed HTTPS" advisory released last week by the Department of Homeland Security: "We are currently unaware of a practical solution to this problem."