Outgunned: How Security Tech Is Failing Us

Our testing shows we're spending billions on defenses that are no match for the stealthy attacks being thrown at us today. What can be done?
Information security professionals face mounting threats, hoping some mix of technology, education, and hard work will keep their companies and organizations safe. But lately, the specter of failure is looming larger.
"Pay no attention to the exploit behind the curtain" is the message from product vendors as they roll out the next iteration of their all-powerful, dynamically updating, self-defending, threat-intelligent, risk-mitigating, compliance-ensuring, nth-generation security technologies. Just pony up the money and the manpower and you'll be safe from what goes bump in the night.
Thing is, the pitch is less believable these days, and the atmosphere is becoming downright hostile.
We face more and larger breaches, increased costs, more advanced adversaries, and a growing number of public control failures. Regulation and litigation have both increased. We're still struggling with the expensive PCI initiative, an effort as controversial as its efficacy is questionable--U.S. businesses continue to hemorrhage credit card numbers and personally identifiable information. The tab for the Heartland Payment Systems breach, which compromised 130 million card numbers, is reportedly at $144 million and counting. The Stuxnet worm, a cunning and highly targeted piece of cyberweaponry, just left a trail of tens of thousands of infected PCs. Earlier this month, the FBI announced the arrest of individuals who used the Zeus Trojan to pilfer $70 million from U.S. banks. Zeus is in year three of its reign of terror, impervious to law enforcement, government agencies, and the sophisticated information security teams of the largest financial services firms on the planet.
"If you're being targeted like that, I hope to hell you have an infrastructure and information security strategy that goes far beyond just antivirus," says the IT director at a Fortune 500 pharmaceutical company.
Some do, some don't. But collectively, we've spent billions of dollars on security technologies, and we still can't curb these threats. Intruders trot through firewalls deployed to block them, while malware flourishes on systems that antivirus vendors pledge to immunize. Meantime, our identity management efforts guzzle funds faster than politicians before a crucial vote.
Most of the IT security vendors we interviewed for this article--and we spoke with many of them--admit that their products have flaws, are less than comprehensive, and certainly have room for improvement. But what many of them are not so forthright about is just how bad the situation is. For example, during our own tests of antivirus system effectiveness, bypassing every one of the five major AV suites we had in our lab was a trivial matter. (Our full report, at informationweek.com/analytics/outgunned, contains a rundown of our AV effectiveness testing.)
The situation is untenable for IT security teams. As one respondent to our InformationWeek Analytics Security Toolbox Survey put it, "Reputable vendors don't explicitly lie, but they do lie by omission."
What happened? Have we been purposely misled? Have we not spent enough money? Are we spending in the wrong places? Are our expectations too high, or is the technology too broken? Or are we just outgunned?
The scary answer is: All of the above. Recent events suggest that we are at a tipping point, and the need to reassess and adapt has never been greater. That starts with facing some hard truths and a willingness to change the status quo.
How We Got Here
At the heart of all security technology is the concept of risk mitigation, realized by the introduction of a technology or "control" to reduce the likelihood of a negative event occurring. Early on, we recognized that if users of computing resources could impersonate one another, serious problems could ensue. We subsequently implemented passwords. When PC viruses became mainstream in the '80s, vendors introduced AV technology. Hashing helped counter password cracking efforts, firewalls reduced exposure to foreign networks, VPNs helped address confidentiality concerns--the beat goes on, and a multibillion-dollar industry was born.
For those serious about protecting their companies' assets, the challenge today is no longer being secure. The challenge is being secure enough. Risk management is the new strategy, and the perfect remains the enemy of the good. Deficiencies, even in our security technologies, are an unfortunate fact of life and one that the pragmatist begrudgingly accepts. But when does a deficiency become so great that a control becomes woefully ineffective? And what happens if that ineffectiveness goes undetected and unaddressed? Blind dependence on an ineffective control clearly equals a false sense of security. But could that false sense of security actually increase levels of risk?
The answer is yes. Sometime in the last few years, a number of our key security technology controls crossed that threshold and ceased to be effective, but as an industry we have yet to adjust. We're pouring billions of dollars--literally--into security products that are gaining us very little. We don't retire anything but rather pile on more layers, leading to increased complexity, expense, and exposure.
Yes, layers bring increased exposure--the overhead of managing so many disparate systems increases costs while decreasing the amount of time overburdened security staffs can spend on more strategic endeavors.
Walking into the CEO's office and saying that the products you've spent a small fortune on are effective only at stopping novices and for checking off compliance forms? That takes more intestinal fortitude than most can muster. But now, finally, undeniable evidence of security tech failures is starting to surface.
Earlier this year, Verizon's Business Risk team and the U.S. Secret Service issued a report on trends and observations from hundreds of real-world intrusions. One of the many useful pieces of data illustrated the increasing use of customized malware by attackers. According to the report, more than half (54%) of the malware used during investigated breaches was either modified or custom-built. Forensic specialists from companies like Forward Discovery, Mandiant, Neohapsis, and NetWitness have found similar levels of sophistication.
In many cases, these customizations render the basic technology controls relied on by most organizations--intrusion detection and prevention systems and antivirus products--effectively blind. This is a harsh fact that some IT groups have been slow to realize, and companies have suffered greatly as a result. It strikes at the core of expensive frontline defenses, and understanding its root cause is central to understanding the challenges ahead.
For starters, the traditional signature-based antivirus model depends on vendors getting their hands on components of a piece of malware before they can create a detection mechanism for it.
That used to be OK, because the intention of early malware authors was to spread their wares as quickly as possible, creating pandemics. That allowed for easy capture and sampling based on sheer volume; there were tens of thousands of systems that would be exposed. But today's skilled attackers don't have that mass-destruction goal. They're launching surgical strikes, and they have the funding and know-how to write custom malware that will go undetected. They even run tests to ensure their creations can slide past our defenses. "Heuristics engines" were supposed to address this practice, but our testing shows they don't solve the problem.
Second, obfuscation technology has allowed attackers to take known malware and simply run it through a packer, a process that is now as easy as zipping a file. Packing is akin to outfitting a jet fighter with a stealth body to allow it to fly under the radar, in this case AV engines. It's not a flawless process, but our testing indicates that packing does work.
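The weakness is easy to demonstrate in miniature. Here's a toy sketch, not any vendor's actual engine: real AV products match far more than whole-file hashes, but the underlying dependency on having seen the exact bytes before is the same. The payload strings and "signature database" below are invented for illustration, and plain zlib compression stands in for a real packer.

```python
import hashlib
import zlib

# Hypothetical signature database: hashes of malware samples the vendor has seen.
SIGNATURES = {
    "known_trojan_v1": hashlib.sha256(b"MALICIOUS-PAYLOAD-v1").hexdigest(),
}

def scan(blob: bytes) -> bool:
    """Return True if the blob matches a known-bad signature."""
    return hashlib.sha256(blob).hexdigest() in SIGNATURES.values()

original = b"MALICIOUS-PAYLOAD-v1"
packed = zlib.compress(original)  # stand-in for running the sample through a packer

print(scan(original))  # True  -- the unmodified sample is caught
print(scan(packed))    # False -- one mechanical transformation and the match is gone
```

The attacker's code still unpacks and runs exactly as before; only its on-disk representation changed, which is all a signature ever looked at.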
"In most cases, we treat AV as a simple check box--either present or absent," says the security officer at a Fortune 500 financial services company. "We measure based on penetration and currentness, not effectiveness. But I had not considered the effects of packers on malware, so to me this is a 'wow!' moment."
Of course, you needn't be a computer forensics expert to observe that antivirus software's effectiveness has decreased over the years. What most organizations don't realize, however, is that the effectiveness of this control approaches 0% when a professional attacker or toolkit is involved.
Finally, as if those two dynamics weren't enough, vendor R&D teams are now playing a volume game they can't hope to win, or even stay competitive in. "Years ago when we started writing checks, we might have been tackling five to 10 a day," says Paul Wood, a senior analyst with Symantec Hosted Services. "It's now well over 10,000 a day and growing." According to McAfee's 2010 Q2 Threat Report, the company identified 10 million pieces of malware in the first half of this year and is tracking close to 45 million in its malware database.
The volume pressure is adversely impacting other security technology areas as well. Intrusion detection and intrusion prevention system R&D teams work around the clock to provide timely coverage for mainstream operating system and service vulnerabilities, yet the signatures they issue cover only a small subset of the vulnerabilities announced weekly. Like antivirus offerings, these products are also now blind to many Web application attacks, contributing to ongoing skepticism. "IDS/IPS systems, at least for us, have not been as helpful as we were hoping," says one respondent to our survey. "If any security technology we have were to fail, this would be the least missed."
Vulnerability assessment products are also behind the curve, as Greg Ose and Patrick Toomey, both Neohapsis application security consultants, found when they recently set out to measure the relative effectiveness of various vulnerability scanners. "It's a question frequently raised by our customers," Toomey says. "They know the tools aren't going to catch all of the problems, but can they count on them to catch, say, 80% of the bad ones?"
What Ose and Toomey discovered was far worse than even they had anticipated. Out of the 1,404 vulnerabilities accounted for by the Common Vulnerabilities and Exposures project during the sample period, there were only 371 signatures. In the best cases, the tools were in the 20% to 30% effectiveness range. "What was striking is that these were not defects in the product's engines or failed signatures," Toomey says. "These were straight-up omissions that the vendors didn't even claim to check. Real-world results using the full set of checks might actually turn up more gaps."
Web application scanners didn't fare much better, as the team discusses in a report for InformationWeek sister site DarkReading.com (see informationweek.com/analytics/appscanners). Toomey's observations are in line with those of security researcher Larry Suto, who earlier this year reported that Web application vulnerability scanners missed almost half (49%) of the vulnerabilities present during his tests. In its recent 2010 State of Software Security Report, application security provider Veracode presented "No single method of application security testing is adequate by itself" as one of its key findings.
The result? The modern attacker now has a checklist that looks something like this: "Have malware specifically packed and tested to thwart antivirus products? Check. Have an entry vector that will sail past the firewall and won't be detected or blocked by IDS/IPS? Check. How about the ability to tunnel through firewalls to smuggle data using proxy-aware, HTTP-compliant communication protocols? Check. Have encryption for that smuggled data to render data loss prevention (DLP) useless? Check. Got keyboard loggers to home in on the IT staff, steal their credentials, and eventually masquerade as them? Absolutely."
Get Back On Track
So should we just give up on the last 10 years and find new careers? While tossing vendors out the window, reclaiming a large percentage of our CPU cycles, eliminating those managed security service providers from our budgets, and saving the 22% product maintenance fees might sound appealing, a wholesale dismantling of the existing security setup would just make the problems worse because it would open too many doors.
Fortunately, despite the grim state of affairs, there are four steps we can take now to start moving the odds closer to our favor.
1| Start spending money on controls that are more in line with threats.
Again, we aren't saying that firewalls, endpoint protection, antivirus, and identity management integration projects don't have their places. But should they consume the vast majority of our security technology spending, or might we want to invest more in controls that are closer to the assets being targeted, or in technologies that have a better chance of being effective?
For example, the 2010 Verizon data breach report places databases as the top type of compromised asset by both the number of breaches and the number of records stolen. Yet our investments in protecting database systems are minimal, at best. When it comes to database security, IT professionals also appear to be downright confused. In our 2010 InformationWeek Analytics State of Database Technology Survey, 70% of 755 respondents answered "yes" when asked if they are assessing the security of their databases. Yet 64% said they didn't know how that was being done. Huh?
Call us jaded, but with databases a top target and related security spending being relatively minimal, it's no wonder that in 2009, 92% of the record losses looked at by Verizon were related to them. We'd say database security auditing tools and database activity monitoring systems might be worth a bigger percentage of the budget, or at least an evaluation (see informationweek.com/analytics/dbsec for more).
Other innovative techs are also worth a look. Data masking and tokenization technologies, like those from Voltage Security, can help address data encryption challenges with legacy systems and development environments. Damballa and FireEye focus on clever methods of detecting and eliminating botnets without depending on signature-based technology. While both companies' products are still young and feel more like features than platforms, most of the IT pros we've spoken with who use them are awakened to the pervasiveness of botnets on large networks.
Did we mention antivirus signatures aren't working?
Application whitelisting technologies from vendors such as Bit9, CoreTrace, McAfee (Solidcore), and SafenSoft are still maturing as well, but even in their current forms they offer alternative ways of getting a handle on endpoint protection by blocking or containing anything that is unknown--a functional inverse of the "try to track millions of bad things" model that is so clearly broken.
2| Adjust assumptions and put to rest some age-old debates.
An argument can be made that a less effective control is acceptable as long as you're aware of it and can insert a compensating control if necessary. For example, while the Neohapsis findings on network vulnerability scanners might be alarming to some, if you move from assuming 80% to 100% coverage to a more realistic 20% to 30% detection rate, there is now room to adjust the strategy. Perhaps the emphasis and primary investment are shifted to the patch management process, with vulnerability scanning a secondary check. There is still value in having both, but the emphasis has been logically reoriented.
Understanding the changing contexts of both the threat landscape and the security technologies in use is also important. For example, unless you've been living under a rock, you're aware of the ongoing "insider vs. outsider threat" debate. While we encourage tracking of trends, the simple fact is that in 2010, both of these adversarial profiles exist for most organizations, and as security professionals we need to address both scenarios in our planning--period.
But there's also a new twist to consider: With an increased number of attackers targeting and hijacking the credentials of IT personnel, the outsider can become the insider, at least from the perspective of our technology controls. Forward-thinking companies will move now to address this scenario. Think about how you'll detect large, anomalous query spikes against key tables in sensitive databases. Ensure you can spot large-scale document downloads from file shares and internal document management systems. If a hijacked credential is used to log into a large number of machines during a short time frame, you should have the ability to spot that activity. A U.S.-based employee account ID logging in from Asia? Perhaps worth a look.
Many of these scenarios can be addressed with existing tools, such as security information and event management (SIEM) platforms. But investing even a small percentage of your security budget in only a few specialized systems to help here will go further than throwing good money at yesterday's outdated controls.
Similar to the "insider vs. outsider" debate, the "stopping stupid vs. stopping evil" distinction is an important one to understand. The key is that the relationship between the two may be closer than we realize. In many cases, stupid is the precursor to evil, and the greater the number of occurrences of stupid, the higher the potential for evil to later occur. In the case of DLP implementations, for example, the technology might not always stop evil, but it definitely helps identify and impede quite a bit of stupid.
Context, assumptions, and relative effectiveness matter.
3| Stop rewarding ineffectiveness and start rewarding innovation.
Maybe right now you're struggling with a scary realization: "The millions I'm spending on firewalls and antivirus technology are relatively worthless if my adversary is skilled."
You're not alone. Part of the reason our main protective technologies are in such a miserable state is that the IT community continues to pay for ineffective technology, and as a result, many of the large vendors have failed to spend the money it takes to innovate.
Again, we don't suggest stopping the use of stock security technologies. But there's a case to be made for finding the least-expensive supplier of specific commoditized security technologies and redirecting energies and funds in more innovative directions.
Note that during our endpoint protection testing we were actually surprised to find that there is some variance between the effectiveness of various antivirus detection engines. Our expectation going in was that there would be minimal to no difference. However, once we saw the AV engines universally fail at detecting our custom malware, we realized that, given the profile of our adversaries, the engine variance is irrelevant.
4| Know when security products can't help you.
Oddly enough, the final part of this conundrum is to recognize when a technology-based fix isn't the answer, or at least not the full answer. Many of the challenges companies face in securing sensitive data are just old-fashioned, need-to-roll-up-the-sleeves problems.
For example, there has been a fair bit of speculation about the effectiveness of Web application firewalls (WAFs) and whether they can help offset Web application security flaws. There is little argument that Web application security flaws are a big risk, a primary attack vector, and an ongoing problem. But does investing in WAF gateways alleviate the need to fix the vulnerabilities in your Web applications?
"The products are similar to an IDS/IPS," says Andy Hoernecke, a consultant at a Fortune 500 retailer who has spent significant time testing WAF gateways. "Perfectly tuned, all signatures enabled, custom signatures created for specific applications, and under the absolute best of conditions, some WAFs might be able to identify attacks 50% to 75% of the time."
Those are better numbers than the 20% to 30% range we see for other security technology sets, but are WAF gateways a replacement for addressing these vulnerabilities? We think not; spending millions on these gateways might buy you a little time, but it isn't going to mitigate enough risk. The hard work of building effective assessment and QA processes and implementing developer training programs is still essential.
Another shortfall we see is that few companies have launched comprehensive processes to patch vulnerable client-side programs like Web browsers, Adobe Reader, Adobe Flash, Java JVMs, and other third-party components that continue to be targets. Most companies have the tools to push out patches, but they haven't built the processes to make the best use of these packages.
The problem is particularly troublesome for small and midsize companies that typically haven't invested in comprehensive vulnerability management processes. This omission could start proving costly.
In a Wall Street Journal article on the recent Zeus-related arrests, Russell Brown, an FBI special agent in the bureau's cyber division, was quoted as saying the ring focused on accounts owned by municipalities, churches, and small and midsize businesses because of their security and technology limitations. Unfortunately, it appears that in 2010, if you manage, move, or have access to money, you already are a target, or soon could be. Keeping endpoints patched still matters.
Fortunately, a little elbow grease and maximizing systems you probably already own can go a long way toward reducing some of these risks.
On The Horizon
Earlier this month, Symantec announced its "Ubiquity" technology, a modernized approach to endpoint protection that adds reputation options to its enterprise antivirus complement. Symantec started the research behind Ubiquity four years ago, when it began to recognize alarming trends in malware creation rates and obfuscation techniques. Using anonymized data from its sizable customer base, the vendor has started tracking the age, source/publisher, and other usage trends for billions of files in an effort to come up with a relative "safety rating." In a nutshell, Symantec is hoping that by leveraging a broad community of users, it can help customers make smarter choices about good and bad applications. For example, say you have an app that has no known publisher, virtually no users, and was first spotted four hours ago. That might be something you want to block.
While the Ubiquity idea is horrendously late and still unproven, we applaud it--it's definitely a step in the right direction.
For its part, McAfee completed its acquisition of application whitelisting vendor Solidcore Systems earlier this year. It's also working on reputation technology, though it's arguably just as behind the curve as Symantec. We'll see if other security technology vendors start on the long road to catching up.
Of course, technology is only a small part of the overall security challenge, but it remains a relevant and key element of our IT risk management strategy. Originally, firewalls were supposed to keep bad things away from the network, antivirus products were supposed to prevent bad things from running on our endpoints, and intrusion detection systems were supposed to detect, well, intrusions. And for many years, they did, to some degree, in an ever-shrinking set of scenarios. But those days are coming to a close. If we remain bound to our relentless commitment to mediocrity, we will be worse off moving ahead. We can and must do better. It's time to change our way of thinking.
Greg Shipley is an InformationWeek contributor and a former CTO. He has spent the last 15 years consulting within the Fortune 500 community on information security matters. Write to us at firstname.lastname@example.org.
The In Crowd: Technologies that account for more than $2 billion in spending per year
> Identity management
The Wallflowers: Account for less than $2 billion but more than $500 million per year
> Full disk encryption
> Intrusion detection and prevention
> Patch management
> Security information and event management
Not Even At The Dance: Account for less than $500 million per year
> Advanced malware and botnet detection
> Application whitelisting
> Database activity monitoring
> Data loss prevention
> Data masking and tokenization
> IT governance, risk management, and compliance tools
> Network traffic capture and forensic technology
> Web application firewalls
> Web application scanners
Data: InformationWeek Analytics