"We're shedding light on some areas where applications themselves -- and the technologies used in them -- have kind of moved on," says Nathan Hamiel, one of the speakers. "But the tools used to test and identify vulnerabilities in those applications haven't really moved on yet.
"When something new [in app development] comes out, developers want to use it, they want to integrate it, they want to be first to develop with it," Hamiel observes. "And testing tools kind of lag behind on that front."
According to Hamiel and his fellow researchers -- Justin Engler, Seth Law, and Greg Fleischer, all of them consultants for FishNet Security -- most automated tools today offer only a limited scope of testing. The speakers will argue that applications and testing data need to be analyzed by people -- not tools -- to find the broadest range of impactful vulnerabilities.
"At the end of the day, tools don't find vulnerabilities. People do," Hamiel says. "[Tools] point a knowledgeable person in the right direction to identify whether or not a vulnerability exists. That's lost in translation when people are spending a lot of money on these testing tools."
Other industry experts agree.
"From login mechanism flaws, certain input validation and session management weaknesses, weak passwords and even gotchas in application logic -- there's just too much for the typical tools to uncover," Kevin Beaver, owner of the security consultancy Principle Logic, wrote recently on the topic. "Ditto with mobile devices and other complexities associated with network infrastructures."
Security testing resources are stretched so thin that organizations can only do so much manual testing, which is why semi-automated testing has become popular, the speakers say. But such automation becomes more difficult as people try to leverage the tools in unorthodox ways or write custom scripts to help them do better application inspection.
"We're pointing out common areas for problems, such as the injection of randomized data. For cross-site request forgeries, people have begun using random tokens to protect against that. Sometimes, those random tokens are generated on a per-page basis," Hamiel says. "You might be running a test case and maybe only the first request you send is valid. Every other request that you send is invalid. Those are problems that crop up that need to be identified and handled."
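The per-page token problem Hamiel describes can be sketched in a few lines. In this hypothetical Python fragment (the field name "csrf_token" and the form layout are assumptions, not details from the speakers' tool), the fuzzer scrapes a fresh token from the most recently fetched page and attaches it to each request, instead of replaying a token captured once at the start of the run:

```python
import re

# Hypothetical helper: pull a per-page CSRF token out of an HTML response
# so each fuzzing request carries a fresh, valid token. The field name
# "csrf_token" is an illustrative assumption -- real applications vary.
TOKEN_RE = re.compile(
    r'<input[^>]*name="csrf_token"[^>]*value="([^"]+)"', re.IGNORECASE
)

def extract_csrf_token(html):
    """Return the page's CSRF token, or None if the form carries none."""
    match = TOKEN_RE.search(html)
    return match.group(1) if match else None

def build_fuzz_request(payload, html):
    """Pair a fuzz payload with the token scraped from the latest page
    load, so the request is not silently rejected as invalid."""
    return {"comment": payload, "csrf_token": extract_csrf_token(html)}
```

Without this re-fetch step, only the first request in a test case is valid, exactly the failure mode Hamiel warns about: every subsequent request carries a stale token and the tool fuzzes dead air.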
Hamiel and his fellow researchers hope to contribute to the collective testing effort by offering a free tool that they say will help address some of the limitations of current fuzzing tools. "We didn't want to write an inspection proxy, and we didn't want to write it in Java. If you look at a lot of current tools -- especially open-source tools and even some commercial tools -- that's kind of the approach they take," Fleischer says. "As a user of that type of tool, you still need to use a Web browser, walk through that application, and do all this heavy-lifting work -- especially with modern Web applications that are very focused on AJAX and other sorts of rich-client technology."
Fleischer says he and his cohorts decided to use Python and to embed the Web browser into the tool.
"[This] can give us better introspection into the overall page and how data is transferred between the tool and the application and back," he says. "That's where a lot of the time and effort was focused."
Most importantly, says Fleischer, the tool was designed to speed up the analysis of testing output generated by custom scripts. This is critical in that sweet spot of semi-automation, where organizations need to test for particular circumstances but want to do it through custom scripts. The output from those scripts can be difficult to analyze and requires manual analysis or the development of a custom analyzer, he says.
"With our tool, we put the output in this common format, import it into our analysis, then run analysis on it," he explains. "It takes those outputs and starts from there, so it gives you kind of a leg up instead of having to write your own custom analyzers every time you write a custom script. If you target a specific import format, then you can leverage all the work that we've done to build an intelligent analysis."
The FishNet foursome will demo their new tool at Black Hat Arsenal the day before their session and also at their presentation on Thursday. They will release source code immediately following the session, and they plan to release installable packages for Mac and Windows after the show.