It's not a new problem, of course, but I'm glad to see this issue being held up to the light. Kudos to reporter Kevin McLaughlin and our sister publication CRN for doing so.
Given how much we've all come to rely on security alerts, and how often IT organizations prioritize their daily workloads around them, it's a problem that's both broad and deep. And it's about time the industry as a whole started talking about this particular elephant in the living room.

The essence of the problem, as I see it: Some suggest that vendors that make their living by selling security software or hardware shouldn't also be in the business of telling customers how serious a particular issue is. After all, the thinking goes, these vendors stand to make money by scaring the tar out of customers, so they're apt to turn everything into a huge issue, even when the problem doesn't deserve it.
The security vendors, in turn--okay, let's name names: primarily McAfee and Symantec--respond that since they're in the business of fixing software that "breaks" due to bugs and viruses, they're in a particularly good position to know how critical an issue is, especially compared with other issues of a similar ilk. Both also employ analysts whose job is to be unbiased researchers, answering questions such as whether MS Blast is worse than Sober.k, Love Bug, or Sasser, and whether dealing with the Microsoft MS06-040 vulnerability should take precedence.
I can see this point, but I believe it essentially boils down to an old IT issue. Companies that consult shouldn't also be in the business of selling hardware and software. This is the primary reason why Gartner, Forrester, IDC, and other research and consulting firms have been successful for a long time. Their business models are all about having no ostensible stake in the outcome--in other words, it shouldn't matter to the consultant if a customer winds up adopting X software or Y hardware to solve any given problem.
Now, most of us old warhorses know this isn't necessarily always the case, and that even supposedly unbiased advisers can have an agenda, whether it's to keep selling more services or just to have fun watching the sparks fly. It also doesn't help when those of us in the media jump all over every security warning, bulletin, and "survey." (News bulletin: "Security vendor says security problems are getting worse." Um, hello? What else would we expect them to say? Even if it's true, there's an appearance of a conflict of interest here, and all of us would be much better off forswearing 95% of these types of stories.)
As all this is happening, new security problems keep appearing, from AT&T's recent Web site hack to organizations' concerns about Web applications in general and service-oriented architectures in particular.
Okay, so we know what's wrong. How to fix it? Well, for starters we do have organizations including Secunia, the SANS Institute, and, despite its flaws, the CERT. These groups aren't being paid by any particular vendroid to say anything and are, as far as I can determine, as free of product hype as possible. Conspiracy theorists--and you know who you are--point out that all security-watch organizations, like all companies, have at their essence a need to keep on keepin' on--that is, to ensure their own survival by touting all security problems as Big Deals. After all, if people aren't scared, there's no need for their services.
There's a grain of truth to that charge, but on the whole I think these kinds of groups are mostly providing a public service with the copious amounts of free information they make available. All of them--except of course for the CERT, which is funded by the U.S. government--have to survive by doing private consulting for both vendors and end-user organizations. They certainly can't live by means of the public listservs and other info they give away.
In other words, if all three of these groups start screaming "Danger, Will Robinson," I'd submit it's time to pay attention to the issue at hand. If they disagree, which the CRN story says they're doing more frequently these days, there's another way to think about it.
Ultimately, it's up to each of us to know our own environment and to keep the perpetual index finger in the air to know what's going on. If there's one lone researcher shrieking about something, well, there are only two choices--either s/he's correct and is far ahead of the rest of the security pack, or s/he's out on a limb that's about to snap off the tree.
We all need to be smart, stay informed, and understand our systems well enough to figure out what needs fixing first, or risk being out of business. No vendor, well-intentioned or not, can make these decisions for us; we need to take this responsibility for ourselves. No one else knows what an acceptable level of risk may be for each of us. That's something IT and the business units need to work on and agree upon together.
So what do you think? Is this controversy about security alerts something to pay attention to, and if so, how do you do that? Or is it a nonproblem of the industry's own making, one that a little common sense will go a long way toward resolving? Respond below.