WannaCry Was an Avoidable Mess for NHS
A new report says that the UK's NHS could have avoided WannaCry entirely. Is it possible to secure a network from the ravages of bottom-line-focused management?
November 1, 2017
How much security can technology provide? More to the point, how much insecurity can technology overcome? It's the kind of question that makes entrepreneurs dream and executives worry.
The great divide between technology and practice has been newly illuminated by a report on the National Audit Office's investigation of the WannaCry attack that hit the National Health Service. Here's the shocking news: It could have been stopped.
Now, anyone who followed WannaCry, how it spread and what it did, knows that millions of organizations around the world escaped the attack because of the operating systems they ran or the precautions they had taken. The NHS, it turns out, was vulnerable and knew it was vulnerable at least a year before WannaCry hit.
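For context, WannaCry spread by firing the EternalBlue exploit at unpatched SMB services listening on TCP port 445, a hole Microsoft had closed with the MS17-010 update roughly two months before the attack. What follows is a minimal sketch, in Python, of the kind of basic exposure check any team could have run; the subnet is a made-up placeholder, and a real audit would go on to verify patch levels rather than stop at an open port.

```python
# Minimal sketch: find hosts on a subnet that still answer on TCP 445,
# the SMB port EternalBlue abused. An open port alone is not proof of
# vulnerability -- it flags machines whose MS17-010 patch status needs
# to be verified. The address range below is a placeholder.

import socket
from ipaddress import ip_network

def smb_reachable(host: str, timeout: float = 0.5) -> bool:
    """Return True if the host accepts a TCP connection on port 445."""
    try:
        with socket.create_connection((host, 445), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for addr in ip_network("192.168.1.0/28").hosts():  # placeholder subnet
        if smb_reachable(str(addr)):
            print(f"{addr}: SMB reachable -- check MS17-010 patch status")
```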
The service had begun to respond, but the response was a variation on the old management favorite, "We'll fix it when we eventually replace the systems now using vulnerable software." Unfortunately for the NHS and at least a few thousand of its patients, WannaCry didn't wait for "eventually."
A decision to remediate by replacement, waiting until the normal refresh cycle plays out, is a management decision, not a security team decision. And it is, let me emphasize, a decision based on money. Most security decisions are, at their core, motivated by money, but in this case it was a gamble lost in very public fashion.
Let's all admit something: If we knew -- really knew -- that every piece of hardware and software attached to the network was fully patched and up to date, and that every application was written to best-practice standards for robust, secure behavior, then a great deal of the money currently devoted to security could go elsewhere. A non-trivial fraction of our total security spending goes to papering over holes that we are confident exist in our applications and infrastructure.
In talking with engineers, developers and security researchers, it has become obvious that a great deal of development is going on that is intended to, in blunt terms, protect us from ourselves. We want to be secure, but we don't want any "friction" in our transactions. We want our systems to be safe, but we don't want to take the time to test them for security. We want secure applications, but we demand that they be updated hourly and simply hope they'll stay safe.
Throughout all of this we have to come face to face with the limits of technology. Security systems are becoming more capable, more flexible and faster, but they are constantly running up against failures in policy, process and human behavior. The day may come when scientists give enterprise networks at least a temporary advantage over the criminals who want to get in. I'm not convinced that we will ever have designs that keep us in front of short-sighted decision-making and ill-considered behavior for any meaningful length of time.
— Curtis Franklin is the editor of SecurityNow.com. Follow him on Twitter @kg4gwa.