Once upon a time, organizations primarily used Web gateways to prevent employees from wasting time surfing the Web — or worse, from visiting gambling, adult, and other unauthorized websites.
A few decades later, Web gateways do much more than enforce regulatory compliance and HR policies. Organizations rely on them to thwart Internet-borne threats in three ways:
- Advanced URL filtering, which uses categorization, reputation analysis, and/or blacklists to control access to categories of malicious or suspicious websites.
- Anti-malware protection, which uses various capabilities (such as antivirus, sandboxing, advanced threat protection, and content inspection) to guard against infections caused by many kinds of malware, including rootkits, worms, Trojans, viruses, ransomware, spyware, and adware.
- Application control capabilities, which manage and limit what users are allowed to do in specific applications.
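The first of these mechanisms, URL filtering, can be sketched in a few lines. This is a hypothetical illustration only: the category names, reputation scores, and lookup tables stand in for the commercial categorization and reputation feeds a real gateway would consume.

```python
# Illustrative sketch of gateway-style URL filtering: a category
# blocklist combined with a reputation-score threshold. All data
# below is made up for the example.
from urllib.parse import urlparse

BLOCKED_CATEGORIES = {"gambling", "adult", "malware", "phishing"}
REPUTATION_THRESHOLD = 40  # block anything scored below this (0-100)

# Stand-ins for commercial categorization/reputation feeds.
CATEGORY_DB = {"casino.example": "gambling", "news.example": "news"}
REPUTATION_DB = {"casino.example": 15, "news.example": 92}

def filter_url(url: str) -> str:
    host = urlparse(url).hostname or ""
    category = CATEGORY_DB.get(host, "uncategorized")
    reputation = REPUTATION_DB.get(host, 50)  # neutral default
    if category in BLOCKED_CATEGORIES:
        return "block"
    if reputation < REPUTATION_THRESHOLD:
        return "block"
    return "allow"

print(filter_url("https://casino.example/slots"))  # block
print(filter_url("https://news.example/today"))    # allow
```

Note that a host absent from both feeds falls through to "allow" here; as the challenges below show, how a gateway treats exactly those unknown sites is where the hard trade-offs live.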
However, although Web gateways have been around for decades and continue to evolve, they aren't bulletproof, and overreliance on them puts data, users, customers, organizations, and reputations in harm's way. Here are five of the biggest Web gateway security challenges:
1. Filtering out malicious sites
Although URL categorization sounds appealing, this approach is actually very limited. To categorize malicious sites with 100% accuracy, Web gateways need to know how to identify even the most advanced threats. Unfortunately, the attackers' rate of innovation combined with frequent zero-day exploits are leaving Web gateways behind the curve.
To make things worse, an estimated 571 new websites are created every second. That volume of new domains all but guarantees that some will be missed by security controls. Malicious URLs are also difficult for filters to detect for three reasons: they may be triggered only by the target organization and remain stealthy during categorization, they're short-lived (often less than 24 hours), and they use dynamic domains, which are harder to thwart than static ones.
2. Protecting against uncategorized websites without compromising productivity
Employees need access to information to be productive. However, many organizations block access to uncategorized sites because of security concerns, and in the process they reduce end-user productivity. Not only does this practice hinder end users, but security teams are forced to deal with an onslaught of support tickets from users who legitimately need access. As a result, security teams find themselves maintaining a growing number of policies and rules. This is a major Web security problem because 1% to 10% of URLs can't be classified for lack of information.
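The policy trade-off described above can be sketched as a default-block rule plus the hand-maintained exception list that security teams end up curating ticket by ticket. Host names and categories here are hypothetical.

```python
# Illustrative default-block policy for uncategorized sites, with a
# manually maintained exception list (the usual outcome of support
# tickets). All names are made up for the example.
exceptions = {"research.example", "vendor-portal.example"}

def decide(host: str, category: str) -> str:
    if category != "uncategorized":
        return "allow"  # handled by the normal category policy
    return "allow" if host in exceptions else "block"

print(decide("research.example", "uncategorized"))    # allow (listed exception)
print(decide("new-startup.example", "uncategorized")) # block -> new ticket
```

Every "block" on a legitimate uncategorized site becomes another ticket and another entry in the exception set, which is why these lists grow without bound.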
3. Fighting infections from websites considered safe
The belief that infections occur only through websites that are categorized as suspicious or malicious is false. Websense estimates that 85% of infections occur through websites considered legitimate and safe. It's becoming increasingly common for so-called safe websites to unknowingly serve malicious content.
A good example is "malvertising," in which attackers inject malicious ads into legitimate online advertising networks; publishers then serve those ads without knowing they're malicious. These malicious ads may not even require any user interaction to infect unsuspecting victims. A recent example is the large-scale malvertising attacks that occurred in June and July this year against several Yahoo properties. To circumvent ad blockers' ability to separate out banner and display ads, some publishers are integrating ads into their general content. Others, including GQ publisher Condé Nast, insist that users disable their ad blockers in order to access content.
Then there's the fact that many seemingly safe websites run on common content management systems that are vulnerable to zero-day exploits and can therefore be compromised by attackers to serve malicious content. In July, thousands of websites running WordPress and Joomla, which together hold roughly 60% of the content management system market, served ransomware to their visitors. And you may remember that back in early 2015, Forbes.com was breached by Chinese hackers who served malicious code via its "Thought of the Day" Flash widget.
4. Identifying malicious files and keeping them out
Although some Web gateways integrate antivirus engines and other file-scanning services, antivirus scanners detect only 20% to 30% of malware.
Leveraging sandboxes to detect malware takes time: files must be run and analyzed. To avoid degrading the user experience, Web gateways often pass files to users while sandboxes complete their analysis in the background, which means users are exposed during that window. Moreover, with the proliferation of sandbox evasion techniques, and because malware is often target-specific, sandboxes are proving less effective.
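The "deliver now, analyze later" pattern described above can be sketched as follows. The sandbox is simulated here, and all file names and timings are illustrative; the point is simply that the file reaches the user before the verdict exists.

```python
# Minimal sketch of asynchronous sandboxing: the gateway releases the
# file to the user immediately and submits it for analysis in the
# background, so a malicious verdict arrives only after exposure.
import concurrent.futures
import time

def sandbox_analyze(file_name):
    time.sleep(0.1)  # stand-in for minutes of detonation and analysis
    return "malicious" if file_name.endswith(".exe") else "clean"

def gateway_deliver(file_name, pool):
    verdict_future = pool.submit(sandbox_analyze, file_name)
    # The file is handed to the user before any verdict is known.
    return "delivered", verdict_future

with concurrent.futures.ThreadPoolExecutor() as pool:
    status, future = gateway_deliver("invoice.exe", pool)
    print(status)           # delivered (user already has the file)
    print(future.result())  # malicious (verdict arrives too late)
```

The alternative, holding every file until analysis completes, closes the exposure window but introduces delays that users and business processes rarely tolerate.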
5. Neutralizing malware on infected machines
Web gateways only analyze network traffic, not what users are actually doing. As such, gateways have a hard time differentiating between legitimate and malicious traffic, and detecting and neutralizing malware on infected machines. In fact, some advanced threats can be active for weeks or even months without being detected.
Indeed, recent research found that 80% of Web gateways failed to block malicious outbound traffic. Remote-access Trojans, whose command-and-control channels gateways routinely fail to detect and stop, are another example.
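One heuristic a gateway or network monitor can apply to outbound traffic is worth sketching: many remote-access Trojans "phone home" at nearly fixed intervals, so unusually regular connection timing to a single host is suspicious. The threshold and sample data below are illustrative assumptions, not a production detection rule.

```python
# Hedged sketch of beacon detection: flag outbound connections to one
# host whose inter-connection gaps show very low jitter. Threshold is
# an illustrative assumption.
import statistics

def looks_like_beacon(timestamps, max_jitter_ratio=0.1):
    if len(timestamps) < 4:
        return False  # too few connections to judge regularity
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean_gap = statistics.mean(gaps)
    return statistics.pstdev(gaps) <= max_jitter_ratio * mean_gap

# Connections every ~60s with tiny jitter vs. ordinary bursty browsing.
print(looks_like_beacon([0, 60.2, 119.9, 180.1, 240.0]))  # True
print(looks_like_beacon([0, 5, 200, 210, 900]))           # False
```

Real RATs counter exactly this by adding random jitter or hiding in high-volume protocols, which is part of why purely network-level detection keeps falling behind.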
Looking Beyond Web Gateways
Web gateways provide valuable functions inside security architectures and deliver basic security against threats arising from Internet browsing. But although they've improved considerably over the years, Web gateways are far from perfect. Their detection-based approach is failing, and as a result users are frustrated by draconian IT policies that block access to important websites. In the foreseeable future, Internet-borne threats will continue to evolve, and the industry must meet the challenge with new Web security defenses that help gateways do the job they were designed to do.