4:48 PM -- Google and I have never seen eye to eye. It's a long history that spans half a decade, and you'd probably have to give me lots of free alcohol to tell it all. The short version: I've never been a fan of Google's stance toward security and privacy.
Not everyone agrees with me. Some people have given Google security accolades based on partial information or anecdotal evidence. But these people simply are not looking at all of the facts available to them.
First, let's agree that Google is just a Web page. Sure, it's a big Web page, with lots of features. But in the end it's just a Website that is designed to market ads to users. It's also a Website with lots of flaws -- flaws that are peppered throughout the site's code.
Based on my research, I can say that Google's vulnerability rate is no better or worse than its competitors' -- it's just about the same. Still, that means almost every time I log onto Google or its competitors -- or use any of their freely available tools -- I find flaws. It's a sad fact: Almost no one is writing good code out there.
More than two years ago, I attempted to explain to Google why some of its services were making it ridiculously easy for phishers to operate. This wasn't a theory -- I was seeing it happen. I approached Google privately, and its IT people verified the issue I raised.
Rather than fixing the flaw, however, Google put a blacklist in place -- meaning it reacted to each abuse only after noticing it. This didn't stop the abuse. I tried very hard to get Google's people to realize how they were harming consumers. Yet, they were steadfast in their opinion that it was better to leave the hole intact. Better for whom?
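The difference between a blacklist and a real fix can be shown in a few lines. This is an illustrative sketch only -- it is not Google's actual code, and the hosts and endpoint are hypothetical -- but it captures why a reactive blacklist never stops the next phisher, while closing the hole does:

```python
from urllib.parse import urlparse

# Reactive approach: block only hosts someone has already reported.
BLACKLIST = {"known-phish.example"}  # hypothetical list of known abusers

def redirect_with_blacklist(target_url):
    """A brand-new phishing host sails straight through this check."""
    host = urlparse(target_url).hostname
    if host in BLACKLIST:
        return "blocked"
    return f"redirecting to {target_url}"

# Actual fix: only ever redirect to hosts the site itself controls.
ALLOWED_HOSTS = {"mail.example.com", "docs.example.com"}  # hypothetical

def redirect_with_fix(target_url):
    """The hole is closed for every attacker, known or not."""
    host = urlparse(target_url).hostname
    if host not in ALLOWED_HOSTS:
        return "blocked"
    return f"redirecting to {target_url}"

# A phishing host that isn't on the blacklist yet:
print(redirect_with_blacklist("https://new-phish.example/login"))  # redirects
print(redirect_with_fix("https://new-phish.example/login"))        # blocked
```

The blacklist version lets every not-yet-reported abuser through; the allow-list version removes the class of abuse entirely, which is the fix I was asking for.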
More than a year later, after abuse upon abuse, it appeared that Google finally got the wake-up call, and fixes began to roll out. By then, the company had known about the hole for more than a year -- we know this because it had built the blacklist.
Yet, during that time, Google actively chose not to completely close the vulnerability -- despite the fact that untold thousands of people could potentially have been compromised by that hole. For a company that professes its belief in doing no evil, Google is awfully concerned with its bottom line -- regardless of potential consumer safety issues.
Side note: That hole is still available on the site in dozens of places. It's still not completely fixed, even years later.
Last week, I found another vulnerability in gmodules.com. Attempting to disclose the flaw properly, I explained this hole to Google, which responded with just a tinge of arrogance.
Google attempted to explain to me why this flaw couldn't be used to steal cookies -- but that's absolutely not what phishers would use it for. Phishers would use it for building phishing sites. However, rather than listening to me -- a person who has been trying to get Google to close its holes for years -- Google's people shrugged it off as a non-issue and closed the bug entirely.
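To see why "it can't steal cookies" misses the point, consider a hypothetical sketch of the attack. Assume a gadget-rendering endpoint that inlines content from an attacker-supplied URL (the host and parameter names here are illustrative, not the real gmodules.com interface):

```python
from urllib.parse import urlencode

# Hypothetical renderer that displays remote content under a trusted domain.
TRUSTED_RENDERER = "https://gadgets.example.com/ifr"

# The attacker hosts a fake sign-in page anywhere at all.
phishing_page = "https://attacker.example/fake-login.xml"

# The lure the victim receives: a link on the trusted domain.
lure = TRUSTED_RENDERER + "?" + urlencode({"url": phishing_page})
print(lure)
```

The victim's address bar shows the trusted domain, but the page body is the attacker's sign-in form. No cookie access is required: the form simply posts whatever credentials the victim types back to the attacker. That is exactly the phishing scenario Google's response ignored.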
Therein lies my biggest beef with Google. Rather than admitting its flaws, taking the proper mitigating steps, and protecting consumers, Google's officials lie about what is -- and is not -- a vulnerability. The company opts not to fix these vulnerabilities, leaving consumers to fend for themselves.
Worse yet, on some of Google's domains, the most critical tools for protecting the consumer -- anti-phishing technology -- have been completely shut off. And why has Google shut them off? To protect its brand -- not the consumer.
I've found countless holes in Google Desktop, yet the company still has never formally acknowledged that these are real flaws. I guess after months of tough debate, it must still be "looking into it." Even though these flaws could lead to your computer being compromised, Google has never issued a warning to that effect.
But don't just take my word for it -- listen to what Google executives say publicly about the company's security policies. In a Wall Street Journal article, Google CIO Douglas Merrill was pretty clear:
Regarding security-flaw disclosure, Mr. Merrill says Google hasn't provided much because consumers -- its primary users to date -- often aren't tech-savvy enough to understand security bulletins, and find them "distracting and confusing." Also, because fixes Google makes on its servers are invisible to the user, notification hasn't seemed necessary, he says.
We can be sure that Google's people know about the site's vulnerabilities, because we tell them. We also can be sure that Google officials know its consumers are being exploited -- again, because we tell them. Yet, Google opts against fixing the vulnerabilities -- or even informing consumers of the risks.
For anyone who thinks Google is a company whose security policy is admirable, I hope this is a wake-up call. Google is not good at security -- what it's good at is hiding its problems so people get a false sense of security when using its sites or tools.
Perhaps this should be considered a how-to guide on what not to do in your own company when it comes to vulnerabilities and public relations with security researchers.