When Will End Users Stop Being Fooled By Online Scams?
Despite millions of dollars in security tools and hours of awareness training, many organizations still find themselves breached by phishing and old-school social engineering attacks. Is there a way to build a better, smarter user?
On a chaotic workday, a top executive scans hastily through dozens of emails that have arrived in the past 10 minutes. There is one from an IT staffer whose name he doesn't know -- he doesn't know most of the people in IT -- and it states that he must reset his password or lose access to his applications. Without thinking, he clicks on the link provided in the email -- and malware is introduced into the entire corporate network.
Every day, employees in enterprises large and small face attacks like this one. Fake emails -- or website messages, phone calls, or texts -- that appear legitimate elude anti-spam software and Web content filters to arrive at the employee's desk. These fraudulent messages -- collectively known as social-engineering attacks -- are quickly becoming the entrée of choice for cybercriminals, both for the most sophisticated attacks and for everyday spam.
"The social-engineering attacks out there have become more sophisticated than ever," says Dan Waddell, senior director of IT security at eGlobal Technology and a member of the board at (ISC)2, the world's largest association of security professionals. "Cold calls, social-engineering emails, Facebook attacks -- they're getting better all the time, and it's not unusual to see a major breach starting with a targeted spear-phishing attack."
Researchers confirm that phishing -- those fraudulent emails that deliver malware or lead users to the wrong websites -- is on the rise again. According to RSA's May 2012 Online Fraud Report (PDF), instances of phishing were up 86 percent in April, reaching their highest level since September 2011.
The driver behind this growth is simple: People are much easier to fool than computers. While software vulnerabilities and weaknesses in security systems are becoming more difficult for cybercriminals to find and exploit, one gullible user can introduce a world of trouble into an organization with a single mouse click. Major breaches at RSA, Zappos.com, Sony, and many other organizations were launched with a single successful targeted phishing attack.
"Social engineering has reached pandemic proportions, yet it’s one of the most ignored attack vectors in security strategies today," says Rohyt Belani, CEO of PhishMe, a service that enables companies to train and test their employees about phishing through simulated attacks. "Both cybercriminals and penetration testers are now saying the same thing: The human element is the weak point in any sort of cyberdefense."
"We have spent the past decade deploying a large number of security controls and investing in protecting servers and applications -- for right now, the user is the easiest target," says Mike Murray, managing partner at MAD Security, a security firm that focuses on modifying the behavior of end users to make client organizations more secure.
While software can be scanned for vulnerabilities, and cyberdefenses can be penetration tested, there is no technological way to test and patch end users for security weaknesses, experts observe. For many enterprises, then, the question becomes: How can users become smarter and savvier about potential social-engineering attacks? Is there a way to make a better user?
A growing number of security companies and consultancies are focusing on that very question. Chris Hadnagy, a professional social engineer who has spoken on the topic at the annual Black Hat USA conference, says that organizations need to move security awareness out of the classroom and into users' minds and desktops.
"Almost every company has a security awareness program, but we see more and more of them being compromised all the time, sometimes with the same exploits that have been used for years," says Hadnagy, who also helps run a social-engineering "capture the flag" contest at the Def Con conference every year.
"Why is security awareness training so ineffective? A lot of it is because the training programs themselves are ineffective," Hadnagy explains. "They're impersonal, boring videos or [computer-based training] given mandatorily in classrooms where people spend the whole time texting or IMing. The [employees] are not engaged. They’re not learning anything. And so they make the same mistakes over and over."
Tim Rohrbaugh, vice president of information security at identity theft protection company Intersections, agrees. "Despite a lot of talk about security and breaches, the typical user is as unaware and unconcerned as they’ve always been," he says. "There are user education programs, but the incentives aren't there to get users to really change their behavior. People are still not very good at filtering what’s real and what isn't."
While many security departments try to treat the human problem with technology -- through spam and content filters, as well as tools that simply prevent users from accessing data -- a growing wave of experts is attacking the problem from a human perspective. The key, they say, is to change both the environment that employees work in -- their corporate culture -- and the way they learn about security.
"When we do social-engineering testing, one of the things we find is that employees behave better in companies that really care about security," Hadnagy says. "In a lot of cases, there is a direct correlation between the amount of money the organization spends on security and how their users fare in social-engineering tests. When the organization cares about security and is willing to invest in it, then their employees usually do, too."