The security blame game makes it easy to point the finger at 'dumb' users, but the delivery mechanisms of today's undetectable Web malware will get past even the savviest and most educated users

Dark Reading Staff, Dark Reading

March 25, 2013


Too many security pros today take it upon themselves to blame "dumb" users for all of the security ills befalling IT organizations, particularly when users fall for phishing and other email-based attacks. But many of today's most advanced attacks arrive through Web channels and are so well-designed that even the most advanced training may not teach users how to detect them. According to many security pundits, it is time to stop the blame game and look in the mirror.

"I honestly don't see the value of employees being listed as the weakest link, mainly because humans are the ones doing everything to begin with. It's either poor implementation of systems, insecure coding, or maybe not taking into account security controls," says Tim Rohrbaugh, chief information security officer at Intersections Inc. "Humans are involved in every facet, so I don't fault users specifically."

Simply pushing generalized user training isn't enough to stop hacks that take advantage of the human element, say pundits like Rohrbaugh. IT has to augment that training with techniques that leverage common malware behavior indicators, give users clearer visual cues for spotting problems, and use those same techniques to deliver more immediate lessons when user behavior does lead to a security incident.

"I think that there's definitely that kind of need to have folks trained, but also IT needs to be able to see when something is amiss--to find out what are the canaries in the coalmine to look for, to say something is wrong with this user's machine," says Wade Williamson, senior research analyst for Palo Alto Networks. [Why isn't database activity monitoring taking hold in the enterprise? See Five Hurdles That Slow Database Security Adoption.]

General training on basic security hygiene definitely has its value, Williamson says, but its long-term value "has a pretty abrupt ceiling."

Palo Alto today released new research that offers evidence as to why that ceiling exists. In the Modern Malware Review, the company examined 26,000 samples of unknown malware, cross-checking them against all the major AV applications to ensure they were not detected and met the conditions of the study. Interestingly, the programs that made the cut as completely undetected came predominantly through Web-based distribution channels rather than email.

"Once you cleaved out the stuff that was completely undetected, a lot of that email malware went away and you were left with a lot of these sources that came from Web browsing, file transfer applications, things like that," Williamson says.

Many of the samples studied in the review were embedded in the browser, making them extremely difficult for even the most knowledgeable users to detect.

"When you can get malware that is injected in that level, then distinguishing that normal browser behavior from the malware behavior gets really tough," he says. "I talk to a lot of security managers and there is just this embedded belief that there's so many stupid users out there, but the game has become so sophisticated that you can't expect a user to know what's going on behind the curtain all the time."

Part of the issue is that over the last decade or so, in the name of usability, IT has done its best to build applications that obscure everything happening behind those curtains, Williamson says. Things are more automated than ever, passwords are saved automatically in browsers and elsewhere, and users get no visibility into how things work. That level of obscurity can be a real detriment to security and is one big reason security professionals should stop beating up typical users for making bad choices.

Take, for example, the issue of trusted communications, says Rohrbaugh. Mechanisms like email-signing help knowledgeable users better understand the risks of opening emails, but the way those mechanisms are presented isn't visible enough to help the average user.

"So you've got half the problems solved, but the other half is giving the visual cues to the employee/consumer so that they can make a decision and say, 'Risky? Not risky?'" Rohrbaugh says. "People pick on the general employee, saying they're the weakest link. But it really needs to be done in such a way where people can understand: 'That is a big hole in front of you with spikes down in the bottom and I wouldn't step on it.'"

But even with better visual cues and less obscurity around risk factors, as sophistication and targeting improve, cyber lures become nearly impossible to detect, to the point that a bad click becomes almost inevitable, says Williamson, who adds that even the best training can't make that risk go away. Which leads back to the discussion of better controls and smarter detection techniques--essentially looking for those canaries in the coalmine that Williamson referenced.

Rohrbaugh agrees that training is just one part of a defense-in-depth strategy. "For the most part, you've got to establish controls," he says. "And just expect that consumers or employees are going to make the wrong mistakes."

One interesting indicator that Palo Alto unearthed in its report was the intersection between malware communicating back to C&C servers using HTTP POST and malware relying on new URLs for distribution.

"There's a cool intersection there because Web applications are hardly ever on new or unknown domains," Williamson says. "So you can take that very simple information and you don't necessarily have to block all unknown domains or all HTTP POST, but if you see those things together, that's a bad thing."

It is those kinds of indicators, and the best practices built around them, that organizations should be seeking to gain early indication that problems are occurring. And then, once the problem is identified and a root cause found, rather than simply blaming a user, make it a teachable moment.

"The biggest area for improvement is notifying the employee in the moment that they have risky behavior," Rohrbaugh says. "People just need to know often and right on time, right when it happens, that their actions are being monitored, and that it's putting the company at risk."

