Cybercriminals will do nearly anything to make a buck, using social engineering lures like natural disasters or the recent injury of University of Florida football player Tim Tebow to distribute malware and infect computer systems. The more systems they infect, the better.
The reality is that security vendors cannot keep up with the threat. Both corporate and consumer users can easily fall victim to malware and bot infections because these social engineering techniques are so good at duping them. And corporate systems, meanwhile, are being targeted by smaller and stealthier botnets, ranging from just a few hosts to a few hundred, affecting up to 9 percent of computers in an enterprise, according to Damballa.
Keeping bots and scareware from gaining a foothold in your enterprise network requires a layered approach, or defense in-depth. It also requires an active effort by security professionals to ensure their protection measures are updated at all times, often supplemented by bleeding-edge research and budget-friendly open-source tools. Leveraging open-source tools, such as the Squid Web caching proxy, with layered access control, content filtering, and security intelligence, can help provide an edge in the battle against bots and scareware.
The two major sources of external threats to enterprise networks are e-mail and the Web. Because of the ridiculous volumes of spam, nearly every enterprise has something in place to filter their e-mail, but not so for the Web. Why aren't companies rallying to stomp out Web-based malware like they did with email spam?
Until that revolution happens, we are left with varying offerings for Web content filtering using technology that has been around and actively used for years. The Squid Web caching proxy is probably the best known and most widely used free, open-source Web proxy available. What's ironic is that although Squid is free, many enterprises have paid for it and are using it right now -- not realizing it's embedded in their commercial Web filtering appliances and used by popular Web sites like Flickr.
Squid can perform content caching, access control, authorization, and logging. Many add-ons have been developed to extend Squid's functionality to include security features like antivirus scanning. It's the flexibility of Squid that allows it to layer those controls into an effective Web filter to block malicious Web sites and potentially harmful executables.
The most basic blocking functionality is based on file extension, IP address, and URL. How many of your users need to be downloading executable files? That number is likely to be few to none, so Squid makes it easy to block files ending in .exe, .com, .bat, .msi, and the like. There are ways around this method, but it is a basic filtering technique within Squid that can stop most of the scareware downloads we've seen and analyzed to date.
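As a minimal sketch of that technique, the following squid.conf fragment uses a case-insensitive `urlpath_regex` ACL to deny requests whose URL path ends in a common executable extension (the ACL name `blocked_extensions` and the exact extension list are illustrative choices, not from the article):

```
# Deny downloads of common Windows executable types by URL path extension
# (-i makes the regex case-insensitive, so .EXE is caught as well)
acl blocked_extensions urlpath_regex -i \.(exe|com|bat|msi|scr|pif)$
http_access deny blocked_extensions
```

Note that `http_access` rules are evaluated in order, so this deny rule must appear before any broad `http_access allow` rule for your users.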
Taking access control to the next level, IP addresses and URLs (or just domain names) known to be hosting malware can be easily blocked. There are numerous resources that provide security intelligence on malicious sites. DNS-BH provides a malware-domain blacklist based on information from several leading malware researchers and groups. A new update to the DNS-BH released this week included more than 130 new domains hosting scareware and rogue antivirus, which seems small considering its current listing includes more than 16,000 malware-related domains.
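A hedged sketch of how a blacklist like DNS-BH's can be wired into Squid: `dstdomain` and `dst` ACLs can read their entries from external files, so the lists can be refreshed by a scheduled job without editing squid.conf. The file paths and ACL names below are assumptions for illustration:

```
# Domain blacklist, one domain per line; a leading dot (e.g. .badsite.example)
# also matches all subdomains of that domain
acl malware_domains dstdomain "/etc/squid/malware_domains.txt"

# IP blacklist, one address or CIDR range per line
acl malware_ips dst "/etc/squid/malware_ips.txt"

http_access deny malware_domains
http_access deny malware_ips
```

After updating the list files, Squid needs a reconfigure (e.g. `squid -k reconfigure`) to pick up the changes, which is easy to fold into the same automated update job.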
Antivirus is the last layer in Squid for catching the malware that gets by the other layers. Add-ons like Viralator, or handing the traffic off to another proxy such as the HTTP antivirus proxy HAVP, allow the use of antivirus engines like ClamAV, F-Prot, AVG, Kaspersky, Sophos, Trend Micro, Nod32, and McAfee. Recent reports from NSS Labs on the performance of antivirus solutions indicate that the Trend Micro and Kaspersky Lab anti-malware tools would be great choices.
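One common way to chain Squid to a scanning proxy like HAVP is the `cache_peer` directive, which forwards requests to a parent proxy. A minimal sketch, assuming a HAVP instance listening on localhost port 8080 (the port is an illustrative assumption; check your HAVP configuration):

```
# Forward web traffic through a HAVP parent proxy for antivirus scanning
# (no-query/no-digest: skip ICP and cache-digest chatter to the peer)
cache_peer 127.0.0.1 parent 8080 0 no-query no-digest default

# Force all requests through the peer rather than going direct
never_direct allow all
```

With this in place, content passes through the antivirus engine behind HAVP before Squid returns it to the client, giving you the final scanning layer described above.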
When all of those layers of protection are combined, a Squid-based Web proxy is a powerful solution for preventing bots and scareware from getting into your network through Web-based channels. It's important to remember that the security intelligence feeding those blacklists changes often, so security teams need to stay on top of it and automate the update mechanisms.
But Web filtering is just one layer of defense. It's based on blacklisting mechanisms, which are not foolproof. You also need such technologies as intrusion prevention systems, proper security practices (like the principle of least privilege), and application whitelisting on desktops.