Endpoint
8/2/2012 08:21 PM
Robert Graham
Commentary

Antivirus And The Wisdom Of Cabbies

The viruses that cabbies -- like the one who drove me to Def Con -- complain about are precisely the ones that antivirus products can't clean

Last month en route to Def Con, in the taxi from the airport, I mentioned that I was in town for a "hacker convention." The cabby immediately started asking for technical support to help remove a "Trojan" (his words) from his machine. I asked him whether he had an updated antivirus. He said he did: he originally had AVG, which caught nothing, and then his son installed a new antivirus that warns him of the infection (and claims to clean it) every time he turns on his computer. He sees this as an improvement, but it still hasn't gotten rid of the virus or stopped his browser home page from being constantly redirected to foreign advertising sites (most aren't even in English).

This is, of course, not unusual. Everyone in our industry experiences much the same thing when getting into taxis. About half the time I tell cabbies I'm in cybersecurity/hacking, I get a similar story: They are infected, and they have the latest antivirus, but it doesn't work as well as they'd hoped.

It's easy to say something generic like "antivirus is broken," but the problem is more complicated than that. Antivirus products just deliver what the market asks for. We are the market; therefore, the problem really is that we are the ones who are broken.

The most common way we (the market) measure antivirus products is by running them against a "wild list" of real viruses. This is nonsense. The percentage of that list a product catches is almost wholly unrelated to the share of the real-world threat that end users experience. The typical wild-list score is above 95 percent, but a product that catches only 50 percent on this test could, in fact, do a better job in the real world.

The problem is timing. By the time the antivirus update that catches a virus reaches your machine, you've already caught the virus.
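To see how much timing matters, here's a toy back-of-the-envelope simulation. The numbers are invented purely for illustration -- they don't come from any vendor or test lab -- but they show how a product with the lower wild-list score can still prevent more real-world infections:

import random

random.seed(0)

# Toy model with made-up numbers, purely for illustration.
# Each virus first appears "in the wild" on day 0; a given user is
# exposed to it on some later (random) day. Product A eventually ships
# a signature for 95% of viruses, but roughly a week after they appear.
# Product B recognizes only 50% of viruses, but from day 0 (say, via
# generic detection). A product prevents the infection only if it can
# detect the virus on the day the user is exposed.

N = 100_000
prevented_a = prevented_b = 0

for _ in range(N):
    exposure_day = random.randint(0, 10)    # when this user meets the virus
    signature_day = random.randint(5, 10)   # when Product A's update arrives

    if random.random() < 0.95 and exposure_day >= signature_day:
        prevented_a += 1
    if random.random() < 0.50:
        prevented_b += 1

print(f"Product A (95% wild-list score) prevented {prevented_a / N:.0%} of exposures")
print(f"Product B (50% wild-list score) prevented {prevented_b / N:.0%} of exposures")

With these made-up numbers, the 95 percent product prevents only about 30 percent of exposures, while the 50 percent product prevents about half. The wild-list score measures the catalog; the timing determines the protection.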

And antivirus products do a horrible job at cleanup. Sure, they can quarantine some files, but often that's impossible, and the virus stays around, as in the example above.

This leads to the "Bayesian Cabby Effect" -- the viruses that cabbies complain about are precisely those that antivirus products can't clean and just leave on their systems for years. Thus, while uncleanable viruses may be only a small percentage of actual viruses, they become the dominant reason people are unhappy with antivirus products.
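To put rough numbers on that effect (the figures below are invented, chosen only to show the shape of the math): suppose 95 percent of infections are cleanable and get removed within a day, while the remaining 5 percent can't be cleaned and linger for a year.

# Assumed, illustrative figures -- not measurements.
cleanable_share, cleanable_days = 0.95, 1        # cleaned within a day
uncleanable_share, uncleanable_days = 0.05, 365  # lingers for a year

# "Infection-days": how much of each kind a user is living with over time.
cleanable_load = cleanable_share * cleanable_days        # 0.95
uncleanable_load = uncleanable_share * uncleanable_days  # 18.25

share = uncleanable_load / (cleanable_load + uncleanable_load)
print(f"{share:.0%} of the infections people are living with are uncleanable")

Even though only 5 percent of infections are uncleanable in this example, they make up roughly 95 percent of the infections people are actually living with at any given moment -- and therefore of the complaints you hear in the back of a taxi.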

Another problem is that antivirus can't (easily) fix stupid. Cabbies complain about the Trojans that hackers put on their machines. They use that word -- I'm not sure they know what it means. Like the citizens of Troy who brought the horse statue inside their gates, a Trojan is something you, yourself, put on your machine. It's like getting syphilis and saying, "But I don't know how that happened."

It may seem totally unfair to judge antivirus products by how well they deal with stupid users, but isn't that the point? Experts can avoid viruses, detect them, and remove them without the aid of antivirus programs. The entire point of antivirus is to protect inexperienced users from themselves. We therefore need to stop evaluating antivirus on how good a job it does for experts and start evaluating it on how good a job it does for everyone else.

I'm sure you've heard the following joke, but I'm going to repeat it anyway: A passerby notices a guy looking for his keys under a street lamp and asks, "Where did you lose your keys?" The guy says, "I'm pretty sure I dropped them in the grass over there." The passerby asks, "So why are you looking here?" The guy answers, "Because the light is better." Measuring antivirus by the wild list is the same concept. We choose criteria that are easy to measure rather than the ones that matter.

So the conclusion is this: Antivirus is going to continue to perform poorly, cabbies are still going to complain, and it's as much our fault (the cybersecurity community's) as it is the antivirus companies'.
