How To Detect Heartbleed Mutations

The nightmare of Heartbleed is not the chaos of fixing the bug. It's identifying the hundreds, possibly thousands, of small mutations still hiding in the network.
By now, most of you have started to implement, or have already implemented, the Heartbleed security patches on your servers. Heartbleed, one of the biggest security exploits the industry has experienced so far, exists in OpenSSL and affects the way TLS/DTLS Heartbeat packets are handled. The weakness allows an attacker to read the memory contents of a connected client or server, which means any web server using OpenSSL is a potential target -- roughly 66% of all websites.
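The flaw itself is simple: a TLS heartbeat message carries its own payload-length field, and vulnerable OpenSSL versions trusted that field without checking it against the payload actually sent. A minimal sketch of the message layout (per RFC 6520; the helper name and example values are illustrative, not from any particular tool):

```python
import struct

def build_heartbeat_request(payload: bytes, claimed_length: int) -> bytes:
    """Build a TLS heartbeat record (RFC 6520 layout).

    A benign client sets claimed_length == len(payload). A Heartbleed
    probe claims far more than it sends; a vulnerable server echoes
    back up to 64KB of adjacent process memory.
    """
    # HeartbeatMessage: type (1 = heartbeat_request),
    # payload_length (2 bytes), then the payload itself
    msg = struct.pack("!BH", 1, claimed_length) + payload
    # TLS record header: content type 24 (heartbeat),
    # version TLS 1.1 (0x0302), record length
    return struct.pack("!BHH", 24, 0x0302, len(msg)) + msg

# Benign request: the claimed length matches the payload.
benign = build_heartbeat_request(b"ping", 4)
# Malicious request: claims 0x4000 bytes but sends only 4 --
# the mismatch a fuzzer or IDS rule should flag.
malicious = build_heartbeat_request(b"ping", 0x4000)
```

Note that the benign and malicious packets are byte-for-byte identical except for the two-byte length field, which is why the bug survived review for so long.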
The biggest nightmare for security managers and infrastructure vendors is not the chaos of trying to fix the vulnerability exposed by Heartbleed, but variations of the bug that are already in the network unbeknownst to them. The Heartbleed vulnerability will follow the classic pattern of mutation. There will be hundreds and possibly thousands of small but critically different variants, so patching the root attack will not stop the threat, and could lead to a false sense of security.
A security patch, by definition, closes a discrete set of security holes in a product. Attack mutations may or may not fall within the scope of the patch. The point is that you are never sure, so you will always be playing catch-up. The key is to test proactively and progressively to defend against Heartbleed mutations.
First step: Fuzzing analysis
The first step in testing for Heartbleed mutations is to perform a full fuzzing analysis of the server SSL stack. This narrows an infinite number of possibilities down to a finite list of candidate breach points. In the case of the root vulnerability, an SSL client could send a Heartbeat request whose claimed payload length exceeded the data actually sent, tricking the server into returning up to 64KB of adjacent memory in its response. A fuzzer would directly test this scenario and would find the vulnerability using an out-of-bounds method (which the fuzzer generates automatically). Once you identify the holes and patch them, you need to rerun the fuzzing in conjunction with your specific mix of application protocols to measure whether loading opens more holes, or whether the patch mitigation lowers quality of experience.
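An out-of-bounds method like the one described above boils down to mutating a length field across its boundary values while the actual data stays fixed. A minimal sketch of that idea applied to the heartbeat payload-length field (the boundary list and function name are illustrative assumptions, not the output of any specific fuzzer):

```python
import struct
from typing import Iterator

# Boundary values an out-of-bounds method typically generates for a
# 16-bit length field (an illustrative list, not an exhaustive corpus).
BOUNDARY_LENGTHS = [0, 1, 0x00FF, 0x0100, 0x3FFF, 0x4000, 0xFFFE, 0xFFFF]

def mutate_heartbeat_lengths(payload: bytes) -> Iterator[bytes]:
    """Yield heartbeat messages whose claimed payload_length field is
    set to each boundary value while the actual payload stays fixed.

    Every case where the claimed length differs from len(payload) is a
    candidate out-of-bounds test of the server's bounds checking.
    """
    for claimed in BOUNDARY_LENGTHS:
        # type 1 = heartbeat_request, then the mutated 2-byte length
        yield struct.pack("!BH", 1, claimed) + payload

cases = list(mutate_heartbeat_lengths(b"ping"))
```

Each generated case would then be sent to the device under test, with the tester checking whether the response leaks more bytes than were supplied.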
The best approach to gaining assurance is one-arm, stateful SSL/TLS testing that is exhaustive and fully realistic. Two-arm "simulations" will not test your network's vulnerability to attack. SSL/TLS fuzzing recursively and exhaustively tests for mutation holes, while true one-arm HTTPS -- with support for keys and certificates, and stateful peering with your network -- lets you test "what if" scenarios at deep, realistic scale. Exhaustive testing treats the device under test (DUT) as a system, which allows stateful interaction with it in one-arm mode and provides the ability to peer into the protocol in question (i.e., SSL/TLS). In the case of fuzzing, the tester automatically walks through the finite state machine of the protocol. No testing system is perfect, but one-arm exhaustive testing exposes the DUT to far more measured coverage and reduces the chance of exploitation.
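Walking a protocol's finite state machine can be pictured as enumerating every reachable message sequence, then fuzzing each message at each state. A toy sketch of that enumeration -- the states and transitions below are a deliberately simplified subset invented for illustration, not the real TLS state machine:

```python
# Toy protocol state machine: state -> list of (message, next_state).
# A real SSL/TLS tester would model the full handshake and record layer.
TRANSITIONS = {
    "start":       [("ClientHello", "hello_sent")],
    "hello_sent":  [("ServerHello", "negotiated")],
    "negotiated":  [("Finished", "established"), ("Heartbeat", "negotiated")],
    "established": [("Heartbeat", "established"), ("CloseNotify", "closed")],
    "closed":      [],
}

def walk(state="start", path=(), max_depth=5):
    """Depth-first enumeration of message sequences up to max_depth.

    A protocol fuzzer would mutate each message along each path,
    probing the DUT's handling at every reachable state.
    """
    yield path
    if len(path) >= max_depth:
        return
    for message, next_state in TRANSITIONS.get(state, []):
        yield from walk(next_state, path + (message,), max_depth)

paths = list(walk())
```

Because the walk covers loops (e.g., repeated Heartbeats in the same state) up to a depth bound, it surfaces sequences a simple request/response simulation would never exercise.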
Finding, fixing, and scanning for vulnerabilities needs to be an iterative process that cannot be left to vendors. IT teams can easily take control of their network security destiny with proactive and progressive testing.
Chris Chapman has more than 20 years of experience with multiprotocol and cloud networking technologies, with emphasis on systems root-cause analysis, quality of experience, performance, and scale realism testing for both IPv4 and IPv6. He has written and deployed ...