Cybersecurity and morality might seem like two entirely different universes. Yet there's something distinctly moralistic in the narrative that surrounds the security industry. It's a narrative that pits good against evil as starkly as any horror flick or morality play — with an emphasis on the dark side.
The security industry is engaged almost exclusively in pursuing the bad thing — the bad actor, the malware, the worm that turns PCs into zombies — and punishing it. All too often the remedy is to kill the bad without enforcing the good. But what if there were a different approach to security — a way to automate doing the right thing? To bring our better angels into the security narrative?
Much of the security industry adheres to this stomp-out-the-bad model, with mixed results. And with so much bad to go around, it's no wonder the cybersecurity market is booming. By one estimate, the market will be worth over $230 billion by 2022, up from nearly $138 billion today. Yet the cost and number of breaches are increasing even faster than security spending. It's what led VMware CEO Pat Gelsinger to tell VMworld 2017 attendees that the security industry has failed its customers — that the prevailing security model is "broken."
In fact, most security breaches and system failures are the result of people not operating systems correctly. They forget to do something, or they grant themselves permission to perform an action and then leave that permission open so that bad actors can exploit it. These missteps could be avoided by a security approach that automatically directs, guides, or encourages system operators to do the right thing, or blocks them from doing bad things. It is an enlightened security leader who prioritizes and budgets for this kind of security policy enforcement; without active and automated enforcement of policy, the breaches keep coming, costs keep rising, and heads keep rolling.
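The "permission left open" failure mode above lends itself to automation. As a minimal, hypothetical sketch (the class and parameter names are illustrative, not any particular product's API), a system can attach a time-to-live to every grant, so that forgetting to revoke one no longer leaves a door open:

```python
from datetime import datetime, timedelta

class PermissionGrant:
    """A grant that expires automatically instead of staying open forever."""

    def __init__(self, user, action, ttl_minutes=15):
        self.user = user
        self.action = action
        # The grant is only good for a short, explicit window.
        self.expires_at = datetime.utcnow() + timedelta(minutes=ttl_minutes)

    def is_valid(self, now=None):
        now = now or datetime.utcnow()
        return now < self.expires_at

def check_access(grant, user, action, now=None):
    # Deny by default: the grant must match the requester and the action,
    # and must not have expired.
    return grant.user == user and grant.action == action and grant.is_valid(now)
```

The point of the sketch is the default: the operator never has to remember to close anything, because the system enforces the expiry on every access check.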
To draw an analogy from the parenting world, the dominant security model today is the equivalent of raising kids only by punishing them when they do bad. A more effective approach is to encourage kids when they do the right thing — thereby building a decision-making framework in their frontal cortex that will override bad behavior. Similarly, by automating good practices in the security world, the system can override bad behavior, which will lead to a safer environment.
At the risk of stating the obvious, this approach is not based on some naïve denial of the existence of the bad actor, the malware — the dark side. In fact, when I was recently asked what malware a policy enforcement approach would catch, I responded that it doesn't try to catch malware at all; rather, it assumes the malware is already present and trying to do something bad. Once that assumption is accepted, you have the opportunity to turn the security model on its head into something far more powerful and resilient to zero-day attacks.
Let's say you want to protect workloads you have running in the cloud. The cloud, of course, is one of the big drivers of the rapid increase in security spending — particularly the increased deployment of cloud-based business applications. It's also a rich source of dark-tinged security narratives, particularly where workloads are concerned. That's because workloads today can span multiple cloud platforms and are vulnerable to security breaches as they move beyond the boundaries of the data center. In the words of Forrester analyst Andras Cser, manual management of cloud workloads is essentially a death wish. That's what not to do.
But what sort of security policy would constitute doing the right thing in this context? And how could one have a policy that scales? A security policy is simply what you decide a priori is the correct behavior. You might decide to protect workloads by automating the enforcement of security policies based on a contextual understanding of the people, data, and infrastructure that access and support each workload, and by enforcing those policies consistently across any cloud.
For example, consider a workload running in a bank's cloud data center in Europe that is migrated to a cloud data center outside the EU. Before the move, the data in the workload was accessible by a bank admin; after it, policy and regulatory mandates (geofencing requirements for data sovereignty, or GDPR) no longer permit a third-party system admin to access the encryption key needed to look inside the private workload data, even though the workload itself moved successfully. To protect the data from prying eyes, the bank could institute a policy delineating who can access data based on where the workload is located. It's the right thing to do, it can be automated, and it is easily enforceable without manual intervention.
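A geofencing policy like this is easy to express in code. The sketch below is purely illustrative — the role names and region identifiers are hypothetical, and real deployments would wire this into a key management service rather than a Python dictionary — but it shows the shape of the idea: access to the encryption key is denied by default and allowed only when the workload is running in a region the policy permits.

```python
# Hypothetical region identifiers for the bank's EU data centers.
EU_REGIONS = {"eu-west-1", "eu-central-1"}

# The policy, decided a priori: which regions each role may unlock data in.
POLICY = {
    "bank_admin": EU_REGIONS,       # may access keys only inside the EU
    "third_party_admin": set(),     # never permitted to access the key
}

def may_access_key(role, workload_region):
    """Deny by default; allow only if the policy lists the workload's region."""
    return workload_region in POLICY.get(role, set())
```

Because the decision depends only on the role and the workload's current location, the check can run automatically on every key request — when the workload migrates out of the EU, access is cut off without anyone having to intervene.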
That's one way to automate good security practices — and it will certainly give our better angels a stronger voice in the security narrative.