The test covered both "point-and-shoot" scanning (get the scanner, launch the scanner) and scanning after each product had been "trained" (manually tweaked and fine-tuned).
Not surprisingly, the scanners produced better results -- fewer false positives and missed vulnerabilities -- after training.
App scanning, like any scan, can only go so far. And no matter how far that is, time-consuming human input (training and analysis, followed by more training and refinement) is an essential part of the process -- I'd say the essential part, and I'd bet most developers would agree with me -- and likely the most expensive part as well.
Vulnerability scanning is becoming big business, and worthwhile business -- the best products improve constantly. But any vulnerability scan is a tricky thing to automate, and it's important to bear in mind that automation can -- and will -- only go so far. (That "so far" is, by the way, far shorter for Web app scanners, because of the nature and malleability of Web apps and the number of links they must crawl through.)
For these reasons alone, Suto's work -- an earlier, and still well worth reading, look at Web app scanners is here (.zip file download) -- is important.
His work reminds us that vulnerability scanners are fine as far as they go, but that their operators -- your IT/security department or service vendor -- carry the real responsibility to build upon the scanners' efficiencies and make the tools truly effective for your business.
In other words, as with so many other things, once you've selected a product, it's all about your training and follow-through.