The Fatal Flaw in Data Security
Simply stated: No matter how sophisticated your security software is, data cannot be simultaneously used and secured. But that may be changing soon.
Keeping data secure has been an ongoing quest for security professionals since we first began processing and storing large amounts of data. For me, that was during my time working in Israeli intelligence in the late 1990s. We had software deployed in insecure areas that we absolutely did not want exposed. But we didn't have a foolproof way to protect it.
The problem is that no matter how sophisticated your security software is, one simple flaw cannot be overcome: data cannot be simultaneously used and secured. Data in memory cannot be encrypted and, at the same time, used by the CPU.
This flaw is devastating because it applies to all software, including all security software. Software encryption keys, for example, can't actually be hidden. When used, data in memory is revealed in plaintext, which leaves these "keys to the castle" exposed and vulnerable. Insiders, or bad actors who gain access, can simply dump the memory and search for the data and keys they need.
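To make that concrete, here is a minimal sketch of how a plaintext key sitting in another process's RAM can be located. It assumes Linux, its /proc interface, and permission to read the target's memory (for example, running as the same user with ptrace allowed, or as root); the marker bytes are a made-up stand-in for a real key.

```python
# memory_scan_sketch.py -- illustrative only, not a real attack tool.
# Assumes Linux /proc and sufficient privileges to read the target's memory.
import re
import sys

MARKER = b"SECRET-AES-KEY-0123456789abcdef"  # stand-in for a real key held in RAM

def scan_process_memory(pid: int) -> bool:
    """Walk the readable regions listed in /proc/<pid>/maps and search each for the marker."""
    with open(f"/proc/{pid}/maps") as maps, open(f"/proc/{pid}/mem", "rb", 0) as mem:
        for line in maps:
            m = re.match(r"([0-9a-f]+)-([0-9a-f]+) r", line)
            if not m:
                continue  # skip regions that are not readable
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            try:
                mem.seek(start)
                chunk = mem.read(end - start)
            except (OSError, ValueError):
                continue  # some special regions (e.g. [vvar]) cannot be read
            if MARKER in chunk:
                print(f"plaintext key found near 0x{start:x}")
                return True
    return False

if __name__ == "__main__":
    # Usage: python memory_scan_sketch.py <pid> -- the same access an insider would have.
    scan_process_memory(int(sys.argv[1]))
```

The point is not the specific tooling; any insider with comparable access can dump memory with off-the-shelf utilities and recover whatever secrets the software was using at that moment.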
Once a bad actor or malicious insider obtains credentials for a system, compromising it is easy. Breaches of this kind occur all the time; nearly every major company, including Facebook, Twitter, and Google, has suffered a breach caused or enabled by an insider. Last September, DoorDash confirmed a data breach through a third-party vendor that exposed the information of nearly 5 million customers, delivery workers, and merchants. The following month, account information on 7.5 million Adobe Creative Cloud users was exposed due to an unprotected online database.
I faced this situation at OpenDNS in 2015. Because our solution involved TLS termination, we held the private keys of customers. This was a huge risk for the company. We knew OpenDNS would not be able to recover if those customer keys were somehow lost or compromised. We spent large amounts of time and money trying to resolve this issue, but we could never completely do so.
Given the seriousness of these risks, layers of encryption and security processes have been developed to mitigate the data-in-use flaw. None has succeeded, and given the Catch-22 nature of protecting data in use, it's unlikely any software-only approach ever will without incurring an unacceptable performance hit.
Hardware Secures What Software Cannot
Academic and industry experts have long known that a practical solution to the secure-data-use conundrum is to create trusted execution and storage environments rooted in trusted hardware. Yan Michalevsky, my colleague from Israeli intelligence and co-founder at Anjuna, studied this in depth during his recent Ph.D. work at Stanford, focusing on how to improve the performance and security of applications running in an enclave.
It turns out there are hardware-based solutions already being used successfully today. Cell phones and Apple laptops have these facilities so well integrated that users don't even know they're there. Until recently, though, there was no equivalent for the most prevalent server CPUs that would make this viable at the enterprise level.
Enclaves Deliver Hardware-Grade Security
That changed in 2015, when Intel introduced Software Guard Extensions (SGX), a set of security-related machine-level instructions built into its newer CPUs. AMD quickly followed with a similar proprietary instruction set for its Secure Encrypted Virtualization (SEV) technology, available in the Epyc processor line. Key cloud providers, including Azure and IBM Cloud, have incorporated these processors into their own infrastructure, paving the way for trusted execution environments and confidential computing in the cloud. AWS has announced its own solution, Nitro Enclaves.
In a secure enclave (also known as a "trusted execution environment"), applications run in an environment isolated from the host. Memory is completely isolated from everything else on the machine, including the operating system. Decryption occurs on the fly inside the CPU, which is itself authenticated through an attestation process to assure that it is both genuine and secure.
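As a rough illustration of that attestation-then-provision pattern, here is a minimal simulation. It is not any vendor's SDK (real quotes are signed by the CPU and verified against the vendor's attestation service; all names here are invented), but it shows why a secret such as a TLS private key is released only to code whose measurement matches what the data owner expects.

```python
# attestation_flow_sketch.py -- a simplified simulation of the enclave trust model.
import hashlib
import os
from dataclasses import dataclass

# The verifier's policy: the hash ("measurement") of the enclave code it trusts.
EXPECTED_MEASUREMENT = hashlib.sha256(b"enclave-binary-v1").hexdigest()

@dataclass
class Quote:
    """Stand-in for a hardware-signed attestation report."""
    measurement: str    # hash of the code actually loaded into the enclave
    report_data: bytes  # in practice, binds a key generated inside the enclave

def enclave_boot(binary: bytes) -> Quote:
    # On real hardware the CPU itself measures the loaded code and signs the report;
    # here the measurement is just a hash so the flow can be run end to end.
    return Quote(hashlib.sha256(binary).hexdigest(), os.urandom(32))

def release_secret(quote: Quote, secret: bytes) -> bytes | None:
    # The data owner provisions the secret only if the measurement matches the code
    # it expects; otherwise nothing sensitive is ever handed to the host.
    if quote.measurement != EXPECTED_MEASUREMENT:
        return None
    return secret  # in practice, encrypted to a key bound to the quote

if __name__ == "__main__":
    good = enclave_boot(b"enclave-binary-v1")
    bad = enclave_boot(b"tampered-binary")
    print("trusted enclave: ", release_secret(good, b"customer TLS private key") is not None)
    print("tampered enclave:", release_secret(bad, b"customer TLS private key") is not None)
```

The host, the hypervisor, and even a root user sit outside this exchange: they never see the plaintext secret, only the encrypted traffic into and out of the enclave.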
Run Secure Anywhere
If enterprises can successfully adopt these capabilities, the implications will be significant. They will be able to maintain total protection and control of their data, even in remote or physically insecure environments, such as the public cloud or operations in totalitarian countries. The fact that secure data can't be seen or used outside the enclave will allow a dramatic rationalization of layered security architectures. Because data can't be seen by infrastructure insiders, many of the security processes protecting it can be virtually eliminated, increasing the productivity of cybersecurity teams and reducing liability risks.
However, we have not achieved Nirvana quite yet. Getting existing applications to work with the various enclave technologies is not simple. Each CPU manufacturer has developed its own proprietary instruction set, which means that for chips from AMD, Intel, ARM, and Amazon Nitro there will be at least four different enclave-enabling technologies and four different SDKs. Existing applications will need to be rewritten and recompiled four times, once for each architecture. It's not reasonable to expect enterprises to do this kind of development work, let alone make the changes to operations and processes that will also be required.
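To illustrate the kind of abstraction that would spare application teams from that work, here is a hypothetical sketch. The backend names mirror the technologies above, but the classes and launch calls are placeholders, not any real SDK; in practice each branch would wrap a different proprietary toolchain.

```python
# enclave_backend_sketch.py -- illustrates the fragmentation problem; all APIs are invented.
from abc import ABC, abstractmethod

class EnclaveBackend(ABC):
    """What a portable runtime would have to hide from the application."""
    @abstractmethod
    def launch(self, app_path: str) -> str: ...

class SgxBackend(EnclaveBackend):
    def launch(self, app_path: str) -> str:
        return f"launch {app_path} via the Intel SGX toolchain (ECALL/OCALL bridging)"

class SevBackend(EnclaveBackend):
    def launch(self, app_path: str) -> str:
        return f"launch {app_path} inside an AMD SEV encrypted VM"

class NitroBackend(EnclaveBackend):
    def launch(self, app_path: str) -> str:
        return f"launch {app_path} as an AWS Nitro Enclave image"

def run_confidentially(backend: EnclaveBackend, app_path: str) -> None:
    # The application stays the same; only the backend selection changes.
    print(backend.launch(app_path))

if __name__ == "__main__":
    for backend in (SgxBackend(), SevBackend(), NitroBackend()):
        run_confidentially(backend, "/opt/myapp")
```

Without a layer like this, every one of those branches becomes a separate port, build pipeline, and operational runbook for the enterprise to maintain.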
The Confidential Computing Consortium, a broad collection of cloud and software vendors, is working to close that gap so businesses can build and leverage the data-secure infrastructure they need to maximize security and minimize operational friction. The hardware foundation exists, and it addresses the fatal flaw. With a little innovation, we will soon see the advent of a new era of total data security. It's coming faster than you might think.