Trusted execution environments are said to provide a hardware-protected enclave that runs software and cannot be accessed externally, but recent developments show they fall far short.

Yehuda Lindell, Chief Scientist at Unbound Tech and Professor of Computer Science at Bar-Ilan University

September 12, 2018


One of the primary challenges in today's computing environments is trust. How do I know that the software I am running is actually the software I intended to run? How do I protect the privacy of my data while it's in memory? How can I even think of solving these problems when I'm running my applications on remote machines or in the cloud? These acute challenges, and more, can be addressed by running software in trusted execution environments such as Intel SGX and ARM TrustZone.

Such environments are said to provide a hardware-protected enclave that runs the software and cannot be accessed externally. Data can be encrypted in memory and then only decrypted inside the enclave, where it remains safe. This prevents malware, service providers, and any unauthorized entity from accessing private data. Furthermore, cryptographically enforced attestation can be used to identify and ensure that authentic software is running in the enclave, even remotely and in a cloud. As such, trusted execution environments can significantly boost the security of the modern computing environment.

Unfortunately, recent developments have shown that existing trusted execution environments fail to meet their promise. The reason is something called side channels. A side channel is an unexpected (and unintended) channel through which private information leaks. It has been known for two decades that seemingly benign information such as the time it takes to compute a function, the power a device uses during computation, and even the noise emitted by a computer's fan can all leak secrets. More recently, it was discovered that two isolated software applications running on the same physical machine can leak information to each other through the hardware resources they share.

A prime example of this type of software leakage is called a cache side-channel attack. Because main memory is very slow relative to the speed of modern processors, memory caches are used to speed up memory access. These caches are shared by different applications on the same machine, which means that the time it takes one application to read from memory is influenced by another application's behavior. For example, if an application has just read a certain instruction from memory, that instruction will reside in the cache. If a second application then reads the same instruction, it will retrieve it much faster than it would have if the first application had not already read it.

Amazingly, the second application can use this timing difference to learn what the first application is doing. These attacks are so effective that they have been used to extract cryptographic keys of all types without exploiting any operating system or software vulnerability; the entire attack is launched by issuing special instructions and measuring response times.
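As a rough illustration of the measurement behind such attacks, here is a minimal C sketch of a Flush+Reload-style timing probe for x86. It is a sketch under stated assumptions, not a working exploit: the threshold value and the local buffer are placeholders I've assumed, whereas a real attack probes memory genuinely shared with the victim (such as a page of a common library) and calibrates the hit/miss threshold for the specific CPU.

```c
/* Minimal Flush+Reload-style cache probe sketch (x86, gcc/clang).
 * Illustrative only: THRESHOLD_CYCLES and shared_line are assumed placeholders. */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

#define THRESHOLD_CYCLES 120  /* hypothetical cache-hit cutoff; needs per-CPU calibration */

/* Time one access to *addr, then flush the line so the next probe starts cold. */
static uint64_t probe(const volatile uint8_t *addr)
{
    unsigned int aux;
    uint64_t start = __rdtscp(&aux);            /* read timestamp counter */
    (void)*addr;                                /* the timed memory access */
    uint64_t elapsed = __rdtscp(&aux) - start;  /* read timestamp again */
    _mm_clflush((const void *)addr);            /* evict the line for the next round */
    return elapsed;
}

int main(void)
{
    static uint8_t shared_line[64];             /* stand-in for memory shared with a victim */
    uint64_t t = probe(shared_line);
    printf("access took %llu cycles: %s\n", (unsigned long long)t,
           t < THRESHOLD_CYCLES ? "likely cached (recently touched)"
                                : "likely uncached (not recently touched)");
    return 0;
}
```

A fast read suggests the line was already in the cache, meaning someone touched it recently; a slow read suggests it was not.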

This type of attack is devastating because it means that virtual machines in the cloud can be attacked by other virtual machines running on the same hardware, even when the isolation provided by the software layers is perfect. As a result, sensitive code, such as the code that carries out cryptographic operations, is written carefully so that it leaks nothing via the cache. This is achieved by ensuring that the code's memory access pattern is independent of the secret key. Although this may be possible in theory, in practice it is extremely hard, and the best code written by the best experts on the subject has been broken time and again over the past few years. In part, this is due to the complexity of the software layers (it isn't always clear what happens when high-level instructions are called) and the complexity of the hardware.
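To make the idea of a secret-independent memory access pattern concrete, here is a minimal sketch in C. The function names, table size, and masking idiom are illustrative assumptions, not code from any real cryptographic library: the first routine touches a cache line chosen by the secret index, while the second touches every entry and keeps only the wanted value with a mask, so the same lines are accessed regardless of the secret.

```c
/* Illustrative sketch only: names, sizes, and the masking idiom are assumptions. */
#include <stdint.h>
#include <stddef.h>

#define TABLE_SIZE 256

/* Leaky: which cache line gets touched depends directly on the secret index. */
uint8_t lookup_leaky(const uint8_t table[TABLE_SIZE], uint8_t secret_index)
{
    return table[secret_index];
}

/* Secret-independent access pattern: read every entry and keep only the
 * wanted one with a mask, so the same lines are touched for every secret. */
uint8_t lookup_uniform(const uint8_t table[TABLE_SIZE], uint8_t secret_index)
{
    uint8_t result = 0;
    for (size_t i = 0; i < TABLE_SIZE; i++) {
        /* mask is 0xFF when i == secret_index and 0x00 otherwise */
        uint8_t mask = (uint8_t)(-(int)(i == (size_t)secret_index));
        result |= (uint8_t)(table[i] & mask);
    }
    return result;
}
```

Even code written in this style can be undone by an optimizing compiler or defeated by finer-grained side channels, which is a large part of why expert-written constant-time code keeps getting broken.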

New Vulnerabilities
This year saw the discovery of powerful new vulnerabilities that exploit speculative execution: Spectre, Meltdown, and now Foreshadow. These attacks exploit the fact that modern processors execute instructions speculatively and only later check whether they should have been carried out. This keeps the chip's pipeline better utilized, since the chip's "guesses" about what to compute are usually correct.
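To illustrate how speculation turns into a side channel, the sketch below shows the well-known bounds-check-bypass pattern from the Spectre (variant 1) paper, written in C with illustrative names of my own. If the attacker has trained the branch predictor, a call with an out-of-bounds x still speculatively reads the secret byte array1[x] and uses it to choose which line of probe_array is pulled into the cache; the speculative work is discarded architecturally, but the cache footprint remains and can be recovered with a timing probe like the one shown earlier.

```c
/* Sketch of a Spectre variant-1 (bounds check bypass) gadget.
 * Names and sizes are illustrative; a real attack also needs branch
 * training and a cache-timing probe over probe_array. */
#include <stdint.h>
#include <stddef.h>

uint8_t array1[16];                 /* data the victim may legitimately read in bounds */
size_t  array1_size = 16;
uint8_t probe_array[256 * 4096];    /* one cache line per possible secret byte value */

void victim_function(size_t x)
{
    if (x < array1_size) {          /* branch the attacker trains to be predicted "taken" */
        /* Under misprediction this runs even when x is out of bounds: the secret
         * byte array1[x] chooses which line of probe_array is loaded into cache. */
        volatile uint8_t tmp = probe_array[array1[x] * 4096];
        (void)tmp;
    }
}
```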

These sophisticated techniques, and others like them, have delivered the ever-increasing computation speed that the world grew accustomed to under Moore's Law, and continues to demand even as the physical limits of transistor size become an obstacle. However, as this year's headlines have taught us, speculative execution also leaks, handing attackers effective new side channels that can be used to read secrets. Foreshadow is especially devastating because even perfectly written software is vulnerable; the entire attack stems from the way the hardware processes memory.

To make all of the above even worse, trusted execution environments like SGX have been constructed so that they also share resources with other applications running on the same hardware. As a result, speculative execution attacks are effective on SGX, and Foreshadow demonstrated that encrypted memory can be easily dumped out of an SGX enclave. Foreshadow is even able to steal the special Intel enclave attestation key, completely breaking the integrity mechanism of SGX. (Note that although such attacks are very nontrivial to design, once an attacker has written the attack, it can be quite easily deployed on a large scale.)

Although Intel has issued local patches, the fundamental flaw of SGX (and other trusted execution environments) is that it is not isolated from other processes. I don't believe that it is possible to construct a truly secure trusted execution environment without full isolation; side channels are abundant, and we are only seeing the beginning. Indeed, instead of SGX being a powerful tool that provides strong and proven security guarantees, it is part of the constant cycle of break, fix, and break again. This is a sad time for the security of our digital world.

Trusted execution can only be trusted if the execution environment is truly isolated from the rest of the chip. Despite its attractive promise, SGX cannot be used today to effectively protect secrets. One can only hope that truly isolated trusted execution environments can be built in the coming years. Because this would require great cost and a complete redesign, it is sadly not something that we will see soon.


About the Author

Yehuda Lindell

Chief Scientist at Unbound Tech and Professor of Computer Science at Bar-Ilan University

Yehuda Lindell is the CEO and Co-Founder of Unbound Tech (previously Dyadic Security), as well as a professor in the Department of Computer Science at Bar-Ilan University. Prior to joining Bar-Ilan in 2004, he was a Raviv Postdoctoral Fellow in the Cryptographic Research Group at the IBM Thomas J. Watson Research Center. He received his Ph.D. in 2002 from the Weizmann Institute of Science, under the supervision of Oded Goldreich and Moni Naor. He is the director of the Bar-Ilan Center for Research in Applied Cryptography and Cyber Security. Unbound Tech uses secure multiparty computation to protect cryptographic keys.

