When selecting targets, attackers often consider the total cost of "pwnership": the expected cost of an operation weighed against its likelihood of success. Defenders need to follow a similar strategy.

Paul Makowski, CTO, PolySwarm

April 2, 2019

6 Min Read

Recently, two in-the-wild exploits for zero-day vulnerabilities in Google Chrome and Microsoft Windows were disclosed by Google's TAG (Threat Analysis Group). The event made headlines, but it's not a new story: a zero-day vulnerability under active attack is discovered, vendors scramble to issue patches, and companies scramble to deploy them. Substantial cost is incurred each step of the way. (For reference, Google patched the Chrome zero-day [CVE-2019-5786] and Microsoft patched the Windows 7 zero-day [CVE-2019-0808].)

The bad news is we've been doing this dance for decades. The good news is that some pockets of the industry have invested significant time and energy into reducing the impact and frequency of attacks that leverage zero-day vulnerabilities. In fact, Google and Microsoft are among the leaders in this space.

What can software companies that lack the security budget of tech titans learn from this latest event, and what can IT managers/CISOs/enterprise decision makers learn to inform product decisions? Here are four strategies to consider:

Software Developers: Adopt a Healthy Skepticism of Unsafe Languages
Unsafe languages (C, C++, Objective-C) are unsafe. There are various definitions of safe, of course, but the C language family doesn't qualify for any of them. Treat unsafety for what it is: cost, risk and liability. Consider Rust or Go among a range of alternatives.
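A minimal illustration of what "safe" buys you in practice: in C, reading past the end of a buffer is undefined behavior and a classic source of exploitable bugs, while in Rust every slice access is bounds-checked. The `packet` data below is a made-up stand-in for untrusted input.

```rust
fn main() {
    let packet: Vec<u8> = vec![0x41, 0x42, 0x43];

    // Checked access: an out-of-range read is a recoverable `None`
    // the caller must handle, not a silent read of adjacent memory.
    assert_eq!(packet.get(1), Some(&0x42));
    assert_eq!(packet.get(10), None);

    // Plain indexing is also checked: `packet[10]` would panic
    // deterministically rather than corrupt memory.
    println!("checked access ok");
}
```

The same out-of-bounds read in C compiles silently and fails unpredictably; here the failure mode is explicit and contained.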

On the surface, it may appear cheaper to develop in a language that's perhaps more familiar, but you need to consider the total cost of ownership (TCO) of that choice. If you want to use an unsafe language to build a safe product, you'll need to invest in exhaustive unit tests, handle any current and future undefined behavior, accommodate cross platform differences, and maintain a fuzzing suite — at a minimum. Like Google and Microsoft, you'll need processes in place for when all these things fail to identify an issue — and they will.
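To make the "maintain a fuzzing suite" cost concrete, here is a sketch of the smallest possible fuzz loop, using only the standard library (a real project would use cargo-fuzz or AFL). The length-prefixed parser and the xorshift generator are hypothetical stand-ins for your own code.

```rust
// Hypothetical parser under test: a one-byte length prefix followed
// by that many payload bytes. Returns None on malformed input.
fn parse_length_prefixed(input: &[u8]) -> Option<&[u8]> {
    let (&len, rest) = input.split_first()?;
    rest.get(..len as usize) // checked: None if the payload is truncated
}

fn main() {
    // xorshift64 with a fixed seed: cheap, deterministic pseudo-random
    // bytes, so failures are reproducible.
    let mut state: u64 = 0x9E37_79B9_7F4A_7C15;
    for _ in 0..100_000 {
        state ^= state << 13;
        state ^= state >> 7;
        state ^= state << 17;
        let buf = state.to_le_bytes();
        let len = (state % 9) as usize; // feed 0..=8 bytes of "input"
        let _ = parse_length_prefixed(&buf[..len]); // must never panic
    }
    println!("fuzzed 100000 inputs without a crash");
}
```

In an unsafe language, the equivalent harness also needs sanitizers (ASan, UBSan) to turn silent corruption into detectable crashes; that tooling is part of the TCO being described.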

If you're starting a new project, there are very few compelling reasons to build in an unsafe language. Reasons typically given include:

  • The project's performance is critical and even minimal overhead is unacceptable;

  • The target platform does not support a safe language; and

  • The project will never handle untrustworthy input.

The cost gap of the first two is closing daily, and the last is undecidable. It's unlikely, for example, that the developers of Ghostscript anticipated receiving untrustworthy input via thumbnailers in GNOME.

If you're maintaining an existing unsafe-language project, consider piecemeal conversion of that codebase to a safe language. Mozilla has been doing this with Firefox, incrementally rewriting unsafe-language portions of the browser in Rust.
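A sketch of that piecemeal-conversion pattern: one risky routine (here, a hypothetical header checksum) is rewritten in Rust and exported with a C ABI, so the remaining C/C++ code can call it unchanged. Built as a `staticlib` or `cdylib`, this lets you replace the old implementation one function at a time.

```rust
#[no_mangle]
pub extern "C" fn header_checksum(data: *const u8, len: usize) -> u32 {
    // Unsafety is confined to this one boundary crossing; the logic
    // itself is safe, bounds-checked Rust.
    if data.is_null() {
        return 0;
    }
    let bytes = unsafe { std::slice::from_raw_parts(data, len) };
    bytes
        .iter()
        .fold(0u32, |acc, &b| acc.wrapping_mul(31).wrapping_add(b as u32))
}

fn main() {
    // Exercised from Rust here; a C caller would declare:
    //   uint32_t header_checksum(const uint8_t *data, size_t len);
    let msg = b"GET /index.html";
    println!("checksum = {:#x}", header_checksum(msg.as_ptr(), msg.len()));
}
```

Each converted function shrinks the unsafe surface while leaving the build and the callers intact, which is what makes the migration affordable.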

Enterprises: Don't Use Outdated (Even if Supported) Operating Systems
What does it cost to upgrade your enterprise from Windows 7 to Windows 10? This can be difficult to quantify. What is perhaps even more difficult to quantify is the value gained from a modern operating system — but both are equally important for calculating an accurate TCO.

The aforementioned zero-day exploit in Windows made use of a null pointer dereference vulnerability in win32k.sys. This class of vulnerability is not exploitable on Windows 8 and later, which block user-mode code from mapping the null page, and Windows 10 provides additional controls to app developers (such as disabling win32k system calls for a process entirely) that substantially reduce the attack surface of this module. Continuing to run an outdated operating system comes with security cost.

You can't patch your way to security, yet patches are typically all you get with an outdated operating system. Windows isn't alone here; macOS and Linux also ship new exploit mitigations, and strengthen existing ones, with each release. By running outdated software, you forgo these benefits.

Software Engineers: Know Your Attack Surface
Zero-day vulnerabilities in widely deployed software are worth money — sometimes a substantial amount of money. Gone are the days when security researchers dropped zero-days on stage for infosec conference attendees. The meat of many presentations at today's offensive-oriented conferences is exploration of previously unknown, unexplored attack surfaces in commodity software.

You need to identify all the ways your product might interact with untrustworthy input — your attack surface. Intended use cases are the tip of the iceberg. Some of the best value you could get out of a third-party audit is to learn new ways of interacting with your software.

Software Developers: Identify, Track, and Sandbox Untrustworthy Input
Once you've mapped out your attack surface, track and contain the usage of that untrustworthy input — and consider sandboxing the code responsible for handling it. The Chromium developers (including Google) have done an excellent job of documenting Chromium's sandbox design. Use this as inspiration: build on it directly (the code is very liberally licensed) or use Google's just-released Sandboxed API. Adobe built on Chromium's sandbox almost a decade ago, reducing the impact of PDF parsing vulnerabilities in Adobe Reader. If in doubt, consult Chromium's Rule of 2.
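"Track untrustworthy input" can be enforced by the type system rather than by convention. This sketch, assuming no particular framework, wraps bytes from the outside world in an `Untrusted` type whose only exit is a validation step; code that forgets to validate simply does not compile.

```rust
// Taint-tracking via a newtype: the raw bytes are private, so callers
// cannot touch them without going through validation.
struct Untrusted(Vec<u8>);

impl Untrusted {
    fn from_network(bytes: Vec<u8>) -> Self {
        Untrusted(bytes)
    }

    // The sole exit: validation consumes the wrapper and returns
    // vetted data, or an error.
    fn validate_printable(self) -> Result<String, &'static str> {
        if self.0.iter().all(|b| b.is_ascii_graphic() || *b == b' ') {
            // Safe: every byte was just checked to be printable ASCII.
            String::from_utf8(self.0).map_err(|_| "not utf-8")
        } else {
            Err("rejected: non-printable bytes")
        }
    }
}

fn main() {
    let ok = Untrusted::from_network(b"hello world".to_vec());
    assert_eq!(ok.validate_printable().unwrap(), "hello world");

    let bad = Untrusted::from_network(vec![0x00, 0xFF]);
    assert!(bad.validate_printable().is_err());
    println!("validation gate enforced");
}
```

The same pattern works in most typed languages; the point is that the compiler, not a code review, catches the path that skips validation.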

Both: If You Must Run/Develop Unsafe-Language Code, Use Exploit Mitigations
If your software product is locked into an unsafe language, there is no excuse not to leverage the exploit mitigations offered by your compiler and target operating system.

Valve's Steam (think iTunes for games) contained multiple vulnerabilities that attackers leveraged to install malware on players' machines. In a separate report, Steam was vulnerable to a classic stack-based buffer overflow in the way it parsed game information. Exploitation was made simple by the absence of stack protection, a mitigation that compilers have offered in various forms for over a decade. Don't be like Valve.
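The bug class behind that overflow is copying attacker-controlled data into a fixed-size buffer with an unchecked length: in C, `memcpy` with the attacker's length silently smashes the stack. The sketch below (the function name and 16-byte buffer are illustrative) shows the defensive pattern: make the bound explicit and truncate or reject oversized input, and use a language or API where an unchecked copy fails closed instead of corrupting memory.

```rust
// Copy untrusted bytes into a fixed 16-byte buffer with an explicit
// bound. In Rust, even getting this wrong (mismatched lengths in
// copy_from_slice) panics deterministically instead of overwriting
// the stack.
fn copy_game_name(untrusted: &[u8]) -> [u8; 16] {
    let mut buf = [0u8; 16];
    let n = untrusted.len().min(buf.len()); // explicit bound
    buf[..n].copy_from_slice(&untrusted[..n]);
    buf
}

fn main() {
    let oversized = vec![b'A'; 1024]; // 1 KiB of "attacker" data
    let buf = copy_game_name(&oversized);
    assert_eq!(buf.len(), 16); // never more than the buffer holds
    println!("copied safely into a {}-byte buffer", buf.len());
}
```

In C, the analogous fix is bounding the copy yourself and compiling with stack protection (e.g., `-fstack-protector-strong`) so that a missed bound aborts instead of handing the attacker control.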

Exploit mitigations are no substitute for choosing to develop in a safe programming language. If you must maintain or develop in an unsafe language, enable them as a matter of course, but do not rely on them as a reason to delay moving to a safe language. If you're responsible for deploying software, use the exploit mitigations available to you.

When selecting targets, attackers often consider the total cost of "pwnership": the expected cost of an operation versus the likelihood of success times the expected payoff. As a defender or a software engineer, conduct the same analysis, and consider how your choices affect the security of software development and deployment.


About the Author(s)

Paul Makowski

CTO, PolySwarm

Paul Makowski's interests include exploitation, program analysis, vulnerability research, reverse engineering and cryptography.

Prior to co-founding PolySwarm, Paul reverse engineered implants and wrote bespoke malware disinfection tools for Fortune 100 clients. Paul authored many of the autonomous program analysis challenges in DARPA's Cyber Grand Challenge, researched partial homomorphic encryption as it applies to protecting programs and network signatures (DARPA CFT), and has co-designed a confidentiality system for a public/private hybrid blockchain for identity management (US DHS). Paul served at the National Security Agency (NSA) for two years as a Global Network Exploitation and Vulnerability Analyst (GNEVA). Paul has competed in and won DEF CON's CTF competition.

Paul holds a patent on detecting exploitation of memory corruption vulnerabilities using symbolic constraints and has two patents pending: one on XOM as a basis for defeating ASLR bypasses, and one on a system for establishing disjoint privilege domains in a single process space.

