Dark Reading is part of the Informa Tech Division of Informa PLC


Endpoint

5/10/2018
02:30 PM
Adam Shostack
Commentary

Risky Business: Deconstructing Ray Ozzie's Encryption Backdoor

With the addition of secure enclaves, secure boot, and related features of "Clear," the only ones that will be able to test this code are Apple, well-resourced nations, and vendors who sell jailbreaks.

Recently, Ray Ozzie proposed a system for a backdoor into phones that he claims is secure. He's dangerously wrong.

According to Wired, the goal of Ozzie's design, dubbed "Clear," is to "give law enforcement access to encrypted data without significantly increasing security risks for the billions of people who use encrypted devices." His proposal increases risk to computer users everywhere, even without it being implemented. Were it implemented, it would add substantially more risk.

The reason it increases risk before implementation is because there are a limited number of people who perform deep crypto or systems analysis, and several of us are writing about Ozzie's bad idea rather than doing other, more useful work.

We'll set aside, for a moment, the cybersecurity talent gap and look at the actual plan. The way a phone stores your keys today looks something like the very simplified Figure 1. There's a secure enclave inside your phone. It talks to the kernel, providing it with key storage. Importantly, the dashed lines in this figure are trust boundaries, where the designers enforce security rules.
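The trust boundary in Figure 1 can be made concrete with a small sketch. This is a toy model with invented names (`enclave_store_key`, `enclave_unwrap`), not Apple's actual Secure Enclave API: the point is that the kernel side of the boundary only ever sees a narrow interface, never the raw key.

```c
/* Toy model of key storage across a trust boundary (hypothetical names,
 * not Apple's real API). The kernel can ask the enclave to store a key
 * or to use it, but there is deliberately no call that returns it. */
#include <string.h>

#define KEY_LEN 32

/* State on the enclave side of the trust boundary. */
static unsigned char enclave_key[KEY_LEN];
static int key_set = 0;

/* The only operations that cross the boundary: store and use. */
int enclave_store_key(const unsigned char *key) {
    memcpy(enclave_key, key, KEY_LEN);
    key_set = 1;
    return 0;
}

/* "Use" the key (here, a toy XOR cipher) without ever exposing it. */
int enclave_unwrap(const unsigned char *in, unsigned char *out, int len) {
    if (!key_set)
        return -1;  /* security rule enforced at the boundary */
    for (int i = 0; i < len; i++)
        out[i] = in[i] ^ enclave_key[i % KEY_LEN];
    return 0;
}
```

The design rule the dashed lines represent is exactly this: everything on the kernel side is untrusted with respect to the key material, so the interface is kept minimal enough to analyze.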

Figure 1. Source: Adam Shostack

Under the Clear proposal, when you generate a passphrase, an encrypted copy of that passphrase is stored in a new storage enclave. Let's call that the police enclave. The code that accepts passphrase changes gets a new set of modules to store that passphrase in the police enclave. (Figure 2 shows added components in orange.) Now, that code, already complex because it manages Face ID, Touch ID, corporate rules about passphrases, and perhaps other things, has to make additional calls into a new module, and it has to do so securely. We'll come back to how risky that is. But right now, I want to mention a set of bugs, lock screen vulnerabilities, and issues, not to criticize Apple, but to point out that this code is hard to get right, even before we multiply the complexity involved.
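To see why the added calls matter, here is a hypothetical sketch of a passphrase-change path; every function name here is an invented stub, not real iOS code. Each call Clear adds is another failure mode in a routine that already has to be perfect, and each failure path must itself be handled securely.

```c
/* Hypothetical sketch of what a Clear-style design adds to the
 * passphrase-change path. All names are invented stubs for illustration. */

/* Existing obligations of the change-passphrase routine (stubbed). */
static int corporate_policy_check(const char *p) { return p ? 0 : -1; }
static int user_enclave_store(const char *p)     { return p ? 0 : -1; }
static int faceid_rekey(const char *p)           { return p ? 0 : -1; }

/* New obligations the backdoor introduces (stubbed). */
static int encrypt_for_vendor(const char *p, char *out) {
    if (!p) return -1;
    out[0] = '\0';  /* pretend to produce an encrypted blob */
    return 0;
}
static int police_enclave_store(const char *blob) { return blob ? 0 : -1; }

int change_passphrase(const char *p) {
    char blob[64];
    if (corporate_policy_check(p)) return -1;
    if (user_enclave_store(p))     return -2;
    if (faceid_rekey(p))           return -3;
    /* --- added by the proposal: mirror into the police enclave --- */
    if (encrypt_for_vendor(p, blob)) return -4;  /* new failure mode */
    if (police_enclave_store(blob))  return -5;  /* new failure mode */
    return 0;
}
```

The sketch understates the problem: the real routine must also decide what to do when the user enclave succeeds but the police enclave fails, and every such branch is new attack surface.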

There's also a new police user interface on the device, which allows you to enter a different passphrase of some form. The passcode validation functions also get more complex. There's a new lockdown module, which, realistically, is going to be accessible from your lock screen. If it's not accessible from the lock screen, then someone can remote wipe the phone. So, entering a passcode, shared with a few million police, secret police, border police, and bloggers around the globe will irrevocably brick your phone. I guess if that's a concern, you can just cooperatively unlock it.

Figure 2. Source: Adam Shostack

Lastly, there's a new set of off-device communications that must be secured. These communications are complex, involving new network protocols with new communications partners, linked by design to be able to access some very sensitive keys, implemented in all-new code. It also involves setting up relationships with, likely, all 193 countries in the United Nations, possibly save the few where US companies can't operate. There are no rules against someone bringing an iPhone to one of those countries, and so cellphone manufacturers may need special dispensation to do business with each one.

Apple has been criticized for removing apps from the App Store in China, and that sets a precedent: Apple's "evaluate" routine may differ from country to country, making the code more complex still.

The Trusted Computing Base
There's an important concept, now nearly 40 years old, of a trusted computing base, or TCB. The idea is that all code has bugs, and bugs in security systems lead to security consequences. A single duplicated line of code ("goto fail;") led to failures of SSL, and that was in highly reviewed code. Therefore, we should limit the size and complexity of the TCB to allow us to analyze, test, and audit it. Back in the day, the trusted kernels were on the order of thousands of lines of code, which was too large to audit effectively. My estimate is that the code to implement this proposal would add at least that much code, probably much more, to the TCB of a phone.
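The "goto fail;" bug is worth seeing in miniature. The code below is a self-contained reduction of the pattern, not Apple's actual sslKeyExchange.c: the duplicated line jumps past the final check with `err` still zero, so verification "succeeds" on data that should fail.

```c
/* Self-contained reduction of the "goto fail" pattern (CVE-2014-1266).
 * One duplicated line makes verify() skip its second check entirely. */

static int check_a(int ok) { return ok ? 0 : -1; }
static int check_b(int ok) { return ok ? 0 : -1; }

int verify(int a_ok, int b_ok) {
    int err;
    if ((err = check_a(a_ok)) != 0)
        goto fail;
        goto fail;  /* the duplicated line: always taken, err still 0 */
    if ((err = check_b(b_ok)) != 0)  /* unreachable: never runs */
        goto fail;
fail:
    return err;  /* 0 ("success") whenever check_a passed */
}
```

With the duplicate line, `verify(1, 0)` returns 0 even though the second check should have failed, which is exactly how signature verification was bypassed in the real bug. A single line, in highly reviewed code.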

Worse, the addition of secure enclaves, secure boot, and related features make it hard to test this code. The only people who'll be able to do so are first, Apple; second, well-resourced nations; and third, vendors that sell jailbreaks. So, bugs are less likely to be found. Those that exist will live longer because no one who can audit the code (except Apple) will ever collaborate to get the bugs fixed. The code will be complex and require deep skill to analyze.

Bottom line: This proposal, like all backdoor proposals, involves a dramatically larger and more complex trusted computing base. Bugs in the TCB are security bugs, and it's already hard enough to get the code right. And this goes back to the unavoidable risk of proposals like these. Doing the threat modeling and security analysis is very, very hard. There's a small set of people who can do it. When adding to the existing workload of analyzing the Wi-Fi stack, the Bluetooth stack, the Web stack, the sandbox, the app store, each element we add distributes the analysis work over a larger volume of attack surface. The effect of the distribution is not linear. Each component to review involves ramp-up time, familiarization, and analysis, and so the more components we add, the worse the security of each will be.

This analysis, fundamentally, is independent of the crypto. There is no magic crypto implementation that works without code. Securing code is hard. Ignoring that is dangerous. Can we please stop?

Adam is a consultant, entrepreneur, technologist, author and game designer. He's a member of the BlackHat Review Board and helped create the CVE and many other things. He currently helps organizations improve their security via Shostack & Associates, and advises startups ...