Zero Trust Is a Great Start; Zero Knowledge Can Be Even Better

For those of us in the decentralized space, the current zero-trust model doesn’t go far enough.

Ben Golub, Chairman and CEO, Storj

August 11, 2021


Zero trust is one of the most important trends in security; searches on the phrase have climbed dramatically during the pandemic, and the concept was even invoked in a recent executive order from President Biden on improving the nation’s cybersecurity.

While traditional security models assume there is a perimeter (physical or logical) inside which individuals or devices can be trusted, the zero-trust model recognizes that no such perimeter exists in modern computing. Adopting zero-trust architectures is certainly a step in the right direction, but for those of us in the decentralized space, the current zero-trust model doesn’t go far enough.

While terms like “trustless” or “zero knowledge” may sound similar to zero trust, zero knowledge embraces a philosophy and approach to system design that goes a step further. In essence, these systems assume that no device or entity acts altruistically and that every device can fail or be compromised, whether through malice or incompetence.

The decentralized view of the world holds that the confidentiality, integrity, and availability of data are best ensured by designing a system so that no individual, device, or entity can bring it down. With a sufficiently large number of independent individuals, devices, or entities motivated by well-designed incentives to act rationally, a large-scale compromise becomes almost impossible.

As a practical example, consider one of the most successful decentralized systems of all time: the Internet. As you read this article or sit in Zoom meetings, data is broken down into tiny packets of information, each wrapped in a header that carries a destination address in standardized form. These packets travel from origin to destination across a large number of independently owned and operated routers and bridges. You don’t know, or care, who operates them.

The Internet has delivered incredible availability compared to the centralized approaches to communication that preceded it. It is almost impossible to take down (i.e., it is highly available) because of the heterogeneity of its structure and the scarcity of centralized points of failure.

The architecture of the Internet is good for availability, but how can we guarantee confidentiality? We can't, so instead, we encrypt sensitive data prior to transporting it.

How can we extend this metaphor of decentralization/zero knowledge to the cloud?

Simple. Follow the same steps used in the design of the Internet: build security and availability through redundancy and heterogeneity, recognizing that systems that assume components will fail at random, and are designed to withstand those failures, achieve much greater resilience.

It turns out that you can design confidential, secure, high-integrity storage systems using decentralization principles. First, don’t run the physical hard drives yourself. Instead, recruit a network of independently operated nodes around the world to contribute their hard drives, embracing a heterogeneity of devices, power sources, networks, geographies, operators, and security systems. In this way, you can build storage systems that mirror the Internet’s foundation on a heterogeneous network of routers and bridges.

These decentralized cloud storage systems are designed to programmatically replace missing file pieces as nodes go offline. Using erasure codes, each file is broken into N pieces, of which only a smaller number, K, is needed to reconstruct it. As long as at least K of the N pieces remain available (that is, unless more than N-K nodes go offline simultaneously), you won’t lose the file. Then it is a matter of choosing the right N and K.
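To make that choice concrete, here is a minimal sketch, assuming independent node failures and purely illustrative values for N, K, and the failure rate (not any network’s production parameters), that computes the probability of losing a file under a simple binomial model:

```python
from math import comb

def file_loss_probability(n: int, k: int, p_fail: float) -> float:
    """Probability that more than n - k of the n pieces are lost,
    assuming each node fails independently with probability p_fail."""
    return sum(
        comb(n, lost) * p_fail**lost * (1 - p_fail)**(n - lost)
        for lost in range(n - k + 1, n + 1)
    )

# Illustrative only: split each file into 80 pieces, any 30 of which can rebuild it,
# and assume a pessimistic 10% chance that any given node is offline.
print(f"P(file loss): {file_loss_probability(80, 30, 0.10):.3e}")
```

Even under that pessimistic failure rate, a wide spread between N and K pushes the loss probability toward negligible, and the repair process described above continuously restores the margin as pieces disappear.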

Second, encrypt data in transit and at rest, preferably with keys known only to the end users or their designees. If you assume that any node could be operated by a malicious or incompetent actor, make sure it doesn’t matter if it is. In addition to encryption, the erasure coding scheme described above adds another layer of protection: since K pieces are required to reconstruct a file, at least K nodes must be compromised just to obtain the encrypted file. Deny attackers a central location to attack, and both the incentive and the ease of attack go away.
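The order of operations matters here: encrypt on the client first, then split. The sketch below, which assumes the third-party Python cryptography package and uses naive byte-chunking as a stand-in for real erasure coding, is meant only to show that node operators never see anything but ciphertext fragments:

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

def encrypt_then_split(data: bytes, n_pieces: int) -> tuple[bytes, list[bytes]]:
    """Encrypt client-side, then split the ciphertext into pieces.
    The key stays with the end user; nodes receive only ciphertext fragments."""
    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(data)
    size = -(-len(ciphertext) // n_pieces)  # ceiling division
    pieces = [ciphertext[i:i + size] for i in range(0, len(ciphertext), size)]
    return key, pieces

key, pieces = encrypt_then_split(b"sensitive payroll records", n_pieces=8)

# Any single operator, or any coalition smaller than the reconstruction threshold,
# holds only meaningless fragments; even the reassembled whole is still ciphertext.
assert Fernet(key).decrypt(b"".join(pieces)) == b"sensitive payroll records"
```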

Of course, in decentralized systems like blockchains, the whole point is that data integrity and confidentiality are ensured by requiring multiple, independent, heterogeneous actors to confirm (via hashing) the integrity of a distributed ledger. Just as Bitcoin (with billions of dollars at stake) has not suffered distributed-ledger integrity issues, the same principles can be applied to data integrity in distributed data systems more generally. If nodes must establish, cryptographically, that they are doing the right thing with the data they possess, and if it takes an overwhelming percentage of actors colluding to compromise integrity, then integrity is preserved. Want to prevent crypto-viral ransomware attacks? Simple: make your encryption and access controls decentralized as well.
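As a highly simplified illustration of that idea (not the actual audit protocol of any particular network), the following sketch shows a challenge-response integrity check: a node can produce the correct answer only if it still holds the exact piece it was given. In practice an auditor would not keep full copies of every piece; it would rely on precomputed challenges or erasure-coded audits instead.

```python
import hashlib
import os

def node_response(stored_piece: bytes, challenge: bytes) -> str:
    """What an honest node computes: a hash over a fresh challenge plus its piece.
    Answering correctly requires possessing the full, unmodified piece."""
    return hashlib.sha256(challenge + stored_piece).hexdigest()

def audit(expected_piece: bytes, challenge: bytes, response: str) -> bool:
    """The auditor verifies the node's answer against its own reference copy."""
    return hashlib.sha256(challenge + expected_piece).hexdigest() == response

piece = b"encrypted fragment entrusted to a remote node"
challenge = os.urandom(16)  # fresh randomness prevents replaying an old answer

assert audit(piece, challenge, node_response(piece, challenge))                 # honest node passes
assert not audit(piece, challenge, node_response(b"tampered data", challenge))  # tampering is caught
```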

While zero trust is a great start, we can go much further. By taking the counter-intuitive step of designing systems where no individual device or actor can be trusted, you can actually increase the trustworthiness of the system as a whole.

About the Author

Ben Golub

Chairman and CEO, Storj

Ben Golub is the executive chairman and CEO at Storj, an open source, decentralized cloud storage provider. Under Ben’s guidance, Storj has rolled out initiatives that deliver better privacy and security for developers and empower open source projects by enabling them to passively earn revenue every time their users store data in the cloud.

Ben also serves as an advisor at Mayfield, a global venture capital firm with over $2.7 billion under management. He was previously co-founder and CEO at Docker, the leader of the container and microservices movement and one of the fastest-growing open source companies in history. Prior to Docker, Ben was co-founder and CEO of Gluster, an open source cloud storage platform that was acquired by Red Hat in 2011. Ben has a BA from Princeton and an MBA from Harvard.
