The Irony Of Preventing Security Failures
It used to be that we were judged by whether we suffered security incidents at all. But today everyone gets hit, so we are now judged by how we deal with a breach. And what if nothing happens because we stopped it? In the long term, that may be the most dangerous outcome of all.
May 1, 2009
Last month, we all experienced the Conficker worm hype. The worm was real, but it was misrepresented, with much of the fear pinned to the April 1 activation date, which came and went without incident. Conficker is still a threat, yet because nothing happened on that date, the world at large has dismissed it.
The obvious risk is that the security industry will be accused of crying wolf and won't be believed the next time something serious happens. The less obvious risk is that the same thing will happen again a few years from now, when people have long forgotten.
But what if April 1 was a real doomsday, and nothing happened because we were successful in stopping the worm?
I once heard a story about Howard Schmidt being hired by Fortune 500 companies to help them prepare for Y2K. Y2K came and went without incident. The boards then called him in and demanded to know why they had spent so much money when nothing happened. The answer lies within the question: what Schmidt could have done better was manage their expectations.
When we as security professionals stop a threat, how can we prove it was real in the first place to justify our work? One example from my own history is Blackworm. It was widespread and dangerous, and we worked hard to coordinate incident response globally -- in my opinion the most impressive global coordination effort up to that point. D-Day came and went, and while many were hit, the world did not come to an end. To this day, I am still asked whether Blackworm even existed, and we are accused of inventing the whole thing. Luckily, CAIDA researched the worm, and I keep that research handy.
A similar issue faces CISOs when they ask for budget to handle a threat. Picture the following scenario: you showed $100,000 in losses from dealing with virus outbreaks and used that figure to justify purchasing new antivirus software for your company, noting that infection costs would be significantly reduced, if not eliminated.
Then your organization suffers no further virus outbreaks, and now you can't justify to the board renewing the license or continuing the update service because there is no longer any loss from outbreaks to point to. Treating security as part of the business and justifying the department's spending financially is a good idea, but the concept of a return on investment in security doesn't always fit perfectly.
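To make that budget tension concrete, here is a minimal sketch, not from the original column, of the return-on-security-investment arithmetic the scenario implies. The only figure taken from the scenario is the $100,000 in outbreak losses; the reduction rate and control cost are hypothetical placeholders.

```python
# Hypothetical return-on-security-investment (ROSI) sketch.
# Only the $100,000 loss figure comes from the scenario above; the rest is assumed.

annual_outbreak_loss = 100_000      # losses shown before buying the antivirus software
expected_loss_reduction = 0.90      # hypothetical: infections "significantly reduced if not eliminated"
annual_control_cost = 30_000        # hypothetical: licenses plus the update service

risk_reduction = annual_outbreak_loss * expected_loss_reduction
rosi = (risk_reduction - annual_control_cost) / annual_control_cost

print(f"Annual risk reduction: ${risk_reduction:,.0f}")
print(f"ROSI: {rosi:.0%}")

# The catch described in the column: once outbreaks stop, the observed loss is zero,
# so a naive before/after comparison makes the same control look like pure cost.
observed_loss_after = 0
naive_year_two_view = observed_loss_after - annual_control_cost
print(f"Naive year-two view: ${naive_year_two_view:,.0f} (the control now looks like a loss)")
```

The point of the sketch is that the ROSI case rests on losses you expect to avoid, while the year-two budget conversation tends to look only at losses actually observed, which is exactly the gap expectation management has to fill.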
There is no easy solution. Two factors work against us: risk analysis, which is often limited by the historical data available, and human psychology (seeing is believing).
Measuring the successes and progress of the corporate security program is important both for justifying costs and for keeping the budget from disappearing.
Keeping management in the loop by presenting success stories and the challenges ahead, as well as a broader picture of what others face, is important. Limiting the surprise factor on spending, and showing management that you are business-oriented and working toward the same business ends, will make you more trustworthy and help you make your case for security.
You won't become obsolete by taking care of problems because there will always be new security threats. What we need to do better is show the business that we are a part of the solution, and that we are fiscally responsible, rather than financial burdens.
Follow Gadi Evron on Twitter: http://twitter.com/gadievron
Gadi Evron is an independent security strategist based in Israel. Special to Dark Reading.