7/8/2010 09:09 PM
New AV Product Testing Methods Stir Debate

Antivirus vendor-backed group says its proposed lab testing standards will provide a fairer and more accurate representation of AV products, but not everyone agrees

Under one of the new guidelines recently released by the antivirus vendor-backed Anti-Malware Testing Standards Organization (AMTSO), AV products could actually end up scoring lower malware-detection rates.

AMTSO's so-called "whole-product testing" guideline calls for labs to stop testing each feature of an AV tool separately and instead pit all of its features together against malware samples -- testing the product holistically rather than component by component. In some cases, according to one AMTSO member, that approach could hurt the 90-percent-range scores many AV vendors have grown accustomed to.
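
To make the scoring difference concrete, here is a rough, purely illustrative Python sketch -- the sample outcomes and layer names are invented, and this is not AMTSO's actual methodology. Under single-feature testing, only the component under test gets credit; under whole-product testing, a sample counts as blocked if any protection layer stops it along the way.

    # Purely illustrative: invented sample outcomes, not real test data or
    # AMTSO's prescribed scoring. Each record notes which layer (if any)
    # stopped the sample when it was delivered the way a user would meet it.
    samples = [
        {"id": "s1", "blocked_by": "url_filter"},
        {"id": "s2", "blocked_by": "on_access_scanner"},
        {"id": "s3", "blocked_by": None},               # missed entirely
        {"id": "s4", "blocked_by": "exploit_blocker"},
        {"id": "s5", "blocked_by": "on_access_scanner"},
    ]

    # Old-style, single-feature score: how many samples the scanner alone caught.
    scanner_only = sum(1 for s in samples if s["blocked_by"] == "on_access_scanner")

    # Whole-product score: a sample counts as blocked if *any* layer stopped it.
    any_layer = sum(1 for s in samples if s["blocked_by"] is not None)

    print(f"scanner-only detection rate: {scanner_only / len(samples):.0%}")  # 40%
    print(f"whole-product block rate:    {any_layer / len(samples):.0%}")     # 80%

Which way the overall numbers move depends on the product and the sample set -- as Schouwenberg notes below, whole-product results can come in lower than old-style tests for some vendors and higher for others.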

"We came to the conclusion that this organization is a tough sell for marketing," says Role Schouwenberg, senior antivirus researcher for Kaspersky Lab, and a member of the AMTSO. "It is promoting tests that will give lower detection results than old-style tests," some of which earned scores of 95 percent detection rates and higher, he says. Whole-product testing could provide lower but more realistic detection rate information, he says.

"In the end, those detection results should reflect reality better ... and give users a much better understanding with results that mean more," he says. Some vendors could actually raise their scores under the new model, too, however, he says.

But the idea of vendors creating the standards by which their products are tested has left a bad taste in the mouths of some security experts, including Rick Moy, director of independent testing lab NSS Labs. "Are consumers being served? That's the big issue here. This industry has the motivation to perpetuate itself," Moy says. "I'm not bashing AV -- these products are important. But you get a false sense of security when vendors define the test."

AMTSO is made up of a who's who of the AV industry, as well as several major independent testing labs. Among its members are AV-Comparatives, Avast Software, AVG Technologies, AV-TEST.org, Bit9, BitDefender, CA, F-Secure, ICSA Labs, Kaspersky Lab, McAfee, Norman, Panda Security, Sophos, Symantec, Trend Micro, Webroot, and West Coast Labs.

NSS Labs was an early participant in the group, Moy says, but dropped out. AV vendors haven't always been happy with NSS Labs' test results, he says, and many took offense to NSS Labs recommending that customers not buy products that scored low in its endpoint protection products tests, for instance.

Moy says NSS Labs testing includes information that AMTSO tests won't, such as how long it took a vendor to add protection for an exploit.

Meanwhile, AMTSO member Trend Micro today issued a press release touting its OfficeScan product's "recommend" rating in a new NSS Labs test of endpoint protection products and their response to socially engineered malware. The product was one of the fastest at blocking a malicious website, according to the test results.

Jon Clay, senior core technology marketing manager at Trend Micro, says Trend supports both AMTSO testing and NSS Labs' testing: "AMTSO from the perspective that they are working toward a better methodology for testing anti-malware solutions with real-world testing methods, and NSS Labs from the perspective that they are one of the leading testing labs who have implemented a better real-world test of anti-malware products. Also, NSS Labs performs an unsponsored, independent test and all the vendors whose products are tested are also members of AMTSO. NSS Labs makes the determination of which vendors and products are part of their test," Clay says.

"We are only promoting the results since Trend Micro performed the best of all the vendors involved. Trend is part of AMTSO to ensure that organization keeps moving forward with recommendations -- and they are recommendations, not laws -- on how testing labs should build out their 'whole-product' tests. Trend Micro will support any testing lab that utilizes better testing methodologies that ensure solutions are tested properly and can take full advantage of all technologies to protect the data," Clay says.

Meanwhile, AMTSO members this week launched a blogging campaign in response to criticism that the group is too vendor-centric and to dispel concerns that the group doesn't like tests provided by non-AMTSO labs. Members including Kaspersky's Schouwenberg, Andrew Lee of K7 Computing, Luis Corrons of Panda Security, David Harley of ESET, Mark Kennedy of Symantec, and Igor Muttik of McAfee say in their group blog posted on their companies' sites that the AMTSO is not a vendor-lobbying organization.

"We find it strange that expertise in the testing field is somehow seen as a disqualification, given the specialist expertise that characterizes the group," they blogged. "The relatively high scores achieved in established tests by major vendors do not necessarily reflect real world performance, but real-world detection cannot be measured in terms of product comparison with no checks on selection, classification and validation of malicious samples and URLs."

They noted that the group has no problem with non-AMTSO labs that offer objective, real-world testing. They do take issue with labs charging for information about how a test was conducted, they say. And they say they are committed to transparency in the testing process.

"When a tester claims to have shared information about methodology in advance, and fails to provide methodological and sample data subsequently, even to vendors prepared to pay the escalating consultancy fees required for such information, this suggests that the tester is not prepared to expose its methodology to informed scrutiny and validation, and that compromises its aspirations to be taken seriously as a testing organization in the same league as the mainstream testing organizations committed to working with AMTSO," they blogged, urging cooperation among all labs to keep the process open.

Andreas Marx, CEO of testing lab AV-Test in Germany and a member of the AMTSO board of directors, says part of the problem with many existing tests is that the samples used to test AV detection aren't always actual malware, or are corrupted files. "Or the speed is just measured in a very simple way, [such as] 'scan C:\' and take the time. Such a test will not reflect the real-world situation properly, as the virus guard is more important than the on-demand scanner," Marx says.
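
The contrast Marx draws could be sketched roughly as follows; "example-av-scan" is a made-up placeholder command, and a real benchmark would need a defined file set, repeated runs, and a guard-off baseline measured the same way.

    # Rough sketch only: the scanner CLI is hypothetical, and a real speed
    # test would repeat runs and compare against a baseline taken with the
    # on-access guard disabled.
    import shutil
    import subprocess
    import time
    from pathlib import Path

    def naive_on_demand_time() -> float:
        """The 'scan C:\\ and take the time' approach Marx criticizes."""
        start = time.perf_counter()
        subprocess.run(["example-av-scan", "C:\\"], check=False)  # placeholder command
        return time.perf_counter() - start

    def guarded_workload_time(src_dir: Path, dst_dir: Path) -> float:
        """Time an everyday file-copy workload while the on-access guard is
        running; its overhead versus a guard-off run is what users actually feel."""
        start = time.perf_counter()
        for f in src_dir.iterdir():
            if f.is_file():
                shutil.copy(f, dst_dir / f.name)  # each copy wakes the on-access guard
        return time.perf_counter() - start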

In one case, a tester said he would test the disinfection of different AV products. "What he did was that all the products got a set of files, he ran a scan, and counted the number of 'successfully disinfected' components as reported by the product," Marx says. But that doesn't check what happened with the files -- whether they had been successfully cleaned as well, he says. "He also did not keep in mind that some samples cannot be disinfected but only deleted -- malware that doesn't infect files, like worms or backdoors."
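
A minimal sketch of the verification step Marx says was missing might look like the following; the sample metadata -- the file path, whether the sample infects files, and the hash of the known-clean original -- is an assumption for illustration, not part of any real test harness.

    # Trust the on-disk outcome, not the product's log. The metadata format
    # below is invented for illustration.
    import hashlib
    from pathlib import Path

    def sha256(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def verify_disinfection(sample: dict) -> str:
        """sample = {"path": Path, "infects_files": bool, "clean_hash": str or None}"""
        path = sample["path"]
        if not sample["infects_files"]:
            # Worms, backdoors, etc. cannot be "disinfected"; the right
            # outcome is that the malicious file is simply gone.
            return "pass" if not path.exists() else "fail: file still present"
        if not path.exists():
            return "fail: host file deleted instead of cleaned"
        # For a true file infector, the cleaned file should match the
        # known-good original, not merely be reported as disinfected.
        return "pass" if sha256(path) == sample["clean_hash"] else "fail: not restored"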

AMTSO is offering more of a best-practices approach than a standards-based one, he says. "They want to provide guidelines and best practices: 'if you want to perform a test in this area, please keep in mind the following aspects,'" for example, Marx says.

If a lab wants to test zero-day exploit detection, for example, it should follow AMTSO guidelines, such as confirming that the sample is really working, he says.
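
What "confirming that the sample is really working" might look like in practice is sketched below; the detonation-report fields are invented stand-ins for whatever a lab's sandbox actually records.

    # Hedged sketch: only samples that demonstrably execute and misbehave on
    # the test configuration are admitted to the zero-day sample set. The
    # report fields are invented, not a real sandbox API.
    def sample_is_valid(report: dict) -> bool:
        if not report.get("executed"):
            return False                                 # corrupted or non-functional sample
        if not report.get("exploit_triggered"):
            return False                                 # exploit never fired on this setup
        return bool(report.get("payload_activity"))      # observable malicious behavior required

    detonation_reports = [
        {"sample_id": "zd-001", "executed": True, "exploit_triggered": True,
         "payload_activity": ["c2_beacon"]},
        {"sample_id": "zd-002", "executed": False, "exploit_triggered": False,
         "payload_activity": []},
    ]
    test_set = [r["sample_id"] for r in detonation_reports if sample_is_valid(r)]  # ["zd-001"]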

"The best test would be to start at the browser or email level and see what your protection program is doing to prevent an infection," he says. "Such a test will cover a lot of different protection layers."

AMTSO has also issued guidelines for testing an AV tool's performance and speed.


Kelly Jackson Higgins is the Executive Editor of Dark Reading. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise ...
