'Fight Club' Aims to Test Pornography Filters

Last year, Untangle's "fight club" proved that antivirus products don't stop all viruses. Now the open source vendor is preparing to show how much porn can escape "safe" content filters

At next week's RSA Conference, Untangle will conduct a "fight club" to see how good six leading Web content filters really are at stopping pornography from reaching users.

Untangle, an open source gateway provider, conducted a similar shootout among antivirus products last year and generated some surprising results. Only three of the AV products in the test stopped all 25 viruses; one product stopped fewer than 10 percent. (See Antivirus Tools Underperform When Tested in LinuxWorld 'Fight Club'.)

This year, Untangle will test some of the best-known content filters, which are used by both enterprises and parents to prevent users from accessing objectionable material. Products from Barracuda, Fortinet, ScanSafe, SonicWall, WatchGuard, and Websense will all be included in the test.

Although such filters are capable of screening all types of objectionable content -- including gambling sites, hate sites, and other material that may offend users or inhibit productivity -- the Untangle test will focus exclusively on pornography. In the test, operators will use PCs protected by each filter to attempt to access 5,000 popular porn URLs and record how many get through.
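The pass/fail tally at the heart of such a test is simple enough to sketch. The snippet below is a hypothetical illustration, not Untangle's actual harness; the "urls.txt" file, the block-page markers, and the assumption that the test PC sits behind the filter under test are all invented for the example.

```python
# Hypothetical sketch of a filter catch-rate test -- not Untangle's harness.
# Assumes the machine running it sits behind the content filter under test
# and that "urls.txt" lists one URL per line.
import requests

BLOCK_MARKERS = ("access denied", "blocked by policy")  # assumed block-page text


def is_blocked(url: str) -> bool:
    """Return True if the filter appears to stop the request."""
    try:
        resp = requests.get(url, timeout=10)
    except requests.RequestException:
        return True  # a reset or refused connection at the gateway counts as blocked
    body = resp.text.lower()
    return resp.status_code == 403 or any(m in body for m in BLOCK_MARKERS)


def run_test(url_file: str = "urls.txt") -> None:
    with open(url_file) as f:
        urls = [line.strip() for line in f if line.strip()]
    if not urls:
        print("No URLs to test.")
        return
    blocked = sum(is_blocked(u) for u in urls)
    print(f"Blocked {blocked} of {len(urls)} URLs "
          f"({100.0 * blocked / len(urls):.1f}% catch rate)")


if __name__ == "__main__":
    run_test()
```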

"We chose porn because it's the most visceral of the content that these filters need to screen out and because porn sites are becoming an increasingly popular platform for launching malware," says Dirk Morris, CTO and founder of Untangle. "Allowing porn in your enterprise can not only create legal and moral issues, but it's rapidly becoming a real threat to security."

Untangle officials declined to comment specifically on the results of their early tests of filtering products, but anecdotally, they have seen some significant shortcomings in content filters.

"Most of these filters work by creating large databases of disallowed material, but not all of these databases are complete," Morris says. "Plus, there are new sites popping up all the time."

If the tests prove that content filters aren't completely reliable, some companies may choose to change their strategies toward filtering, Morris suggests. "Some organizations are taking more draconian approaches, such as keyword filtering or whitelisting," he observes. "That can be problematic because you're going to get false positives. But it might make sense for organizations like schools, where it's critical to keep the objectionable material out."
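The trade-off Morris points to can be seen in a few lines. The following is a hypothetical comparison of naive keyword filtering and whitelisting; the keyword list and allowlist are invented, and the "essex" URL shows the kind of false positive a substring match produces.

```python
# Hypothetical illustration of the "more draconian" approaches mentioned above:
# keyword filtering and whitelisting. Lists are made up for the example.
KEYWORDS = {"sex", "xxx"}                         # naive keyword filter
ALLOWLIST = {"wikipedia.org", "school.example"}   # whitelist: only these get through


def keyword_filter(url: str) -> str:
    return "BLOCK" if any(k in url.lower() for k in KEYWORDS) else "ALLOW"


def whitelist_filter(domain: str) -> str:
    return "ALLOW" if domain in ALLOWLIST else "BLOCK"


# The false-positive problem in one line: a legitimate site tripped up
# by a substring match ("essex" contains "sex").
print(keyword_filter("http://www.essex.example/tourism"))  # BLOCK -- false positive
print(whitelist_filter("news.example"))                    # BLOCK -- not on the allowlist
```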

Content filtering probably will never be 100 percent effective, Morris says. "There will always be ways for the content to get through, and there will always be users who try to get around the filters."

"Really, these types of products are only part of the solution," he says. "You need to tell your employees that their Web surfing is being tracked, and you need to tell them that these sites can be a source of malware. And they need to be asking themselves whether surfing porn is really a proper activity for the workplace."

About the Author

Tim Wilson, Editor in Chief, Dark Reading

Tim Wilson is Editor in Chief and co-founder of DarkReading.com, UBM Tech's online community for information security professionals. He is responsible for managing the site, assigning and editing content, and writing breaking news stories. Wilson has been recognized as one of the top cybersecurity journalists in the US in voting among his peers conducted by the SANS Institute. In 2011 he was named one of the 50 Most Powerful Voices in Security by SYS-CON Media.
