1/21/2011 02:58 PM

Google Acknowledges Web Spam Complaints

Low-quality content has some Internet users worried about the relevance of Google search results.

Google on Friday tried to quell grumblings about the quality of its search results.

In recent months, prominent bloggers and tech news sites have noted many instances in which Google searches returned poor results. Programmer Jeff Atwood, who runs the popular Coding Horror blog, recently characterized the volume of complaints as "deafening."

The issue is Web spam. Google principal engineer Matt Cutts, who runs Google's Web spam team, defines Web spam as "the junk you see in search results when Web sites try to cheat their way into higher positions in search results or otherwise violate search engine quality guidelines."

Web spam is a critical issue for Google, perhaps to the point that it imperils Google's search business. If low-quality content continues to find prominent placement in Google's search results and generates enough revenue -- through Google ads, third-party ads, or direct sales -- to fund further Web spam creation, users will slowly but surely turn to other means of content discovery. Social search is often mentioned as a contender in this scenario, which explains why the rise of Facebook has Google worried.

What makes Web spam particularly pernicious is that it's not as easy to identify as malware. Web spam runs the gamut, from blatant attempts to trick Google with unlawfully copied content and repeated search keywords to low-quality writing produced by so-called "content farms."
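To illustrate the blatant end of that spectrum, consider how a keyword-stuffed page differs statistically from ordinary prose: a repeated search term accounts for an outsized share of the words on the page. The following is a simplified sketch of that one signal, not Google's actual method; the 10% density threshold and the tokenizer are arbitrary assumptions, and real systems weigh many signals together.

    import re
    from collections import Counter

    def keyword_density_flags(text, threshold=0.10):
        """Flag words whose share of all words on a page exceeds `threshold`.

        A crude stand-in for one spam signal: keyword-stuffed pages show
        abnormally high density for the term being gamed.
        """
        words = re.findall(r"[a-z']+", text.lower())
        if not words:
            return []
        counts = Counter(words)
        return [w for w, c in counts.items() if c / len(words) > threshold]

    # A keyword-stuffed snippet trips the check; normal prose would not.
    spammy = "cheap widgets cheap widgets buy cheap widgets cheap widgets now"
    print(keyword_density_flags(spammy))  # ['cheap', 'widgets']

A heuristic this simple is trivially gamed, which is precisely why content farms are harder to catch: their pages read as plausible, low-quality prose rather than obvious manipulation.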

Cutts's response to the growing chorus of criticism is twofold: he disputes the accuracy of the complaints while offering assurance that further steps to stamp out Web spam are under way. Google's search quality is "better than it has ever been in terms of relevance, freshness and comprehensiveness," he insists, even as he acknowledges there has been "a slight uptick of spam in recent months," which Google's engineers are addressing.

Cutts cites a number of steps Google has taken to beat back Web spam, including better detection of hacked sites and changes to its search algorithm that deemphasize low-quality Web sites. And he stresses that being a Google advertising client doesn't buy a better search rank.

Cutts concedes that Google can and should do better, even as he suggests that users' perception of the prevalence of Web spam may be the result of "skyrocketing expectations."

The trouble is that Web spammers are trying to do better too.

If Google is to prevail, it may have to look beyond the security arms race, where stalemates rather than victories seem to be the norm, and forgo some ad revenue in order to starve the content farms that feed from Google's trough.
