Project Sonar Crowdsources A Better Bug Killer

Scans of the entire Internet for known vulnerabilities turn up terabytes of data, but the next steps won’t be easy.

Mathew J. Schwartz, Contributor

September 30, 2013

Can the vulnerabilities present in every public-facing network be recorded, identified and ultimately eradicated? That's the goal of Project Sonar, officially launched at the recent DerbyCon 3.0 conference in Louisville, Ky., which wrapped Sunday.

Project Sonar is the brainchild of HD Moore, who is chief research officer at Rapid7, as well as the creator and chief architect of Metasploit, the open-source penetration testing framework that can be used to test systems for known vulnerabilities. In a similar vein, Moore's latest project is focused on turning data gleaned from Internet-wide scans into actionable information for security professionals, developers and product vendors.

"Project Sonar is a community effort to improve security through the active analysis of public networks," said Moore in a related blog post. "This includes running scans across public Internet-facing systems, organizing the results, and sharing the data with the information security community." In one sense, the project focuses on spotting vulnerabilities not just for individual sites or Web applications -- as Metasploit does -- but in potentially every Internet-connected network.


So far, Rapid7 has released about 3 TB of raw data gathered from scans of IPv4 TCP banners and UDP probe replies, IPv4 reverse DNS lookups and IPv4 SSL certificates. It's now inviting other researchers to not just comb through the data, but generate and share their own.
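Combing through that kind of banner data typically means scanning millions of records for version strings tied to known-vulnerable software. The sketch below illustrates the idea; the record layout and field names are illustrative assumptions, not the actual published Sonar schema.

```python
# Hypothetical sketch: filtering a Sonar-style banner dump for hosts
# running a particular server version. The JSON-lines layout and the
# "ip"/"data" field names are assumptions for illustration only.
import json

def find_matching_hosts(lines, needle):
    """Yield (ip, banner) pairs whose banner contains the given string."""
    for line in lines:
        record = json.loads(line)
        if needle in record.get("data", ""):
            yield record["ip"], record["data"]

# Two made-up records using documentation-reserved addresses:
sample = [
    '{"ip": "192.0.2.10", "port": 80, "data": "Server: Apache/2.2.3"}',
    '{"ip": "192.0.2.11", "port": 80, "data": "Server: nginx/1.4.1"}',
]
print(list(find_matching_hosts(sample, "Apache/2.2.3")))
```

At terabyte scale the same idea would be run as a streaming or distributed job rather than a single loop, but the matching logic stays this simple; the hard part, as Moore notes later, is interpreting the matches.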

What are the potential upsides of this type of project? For starters, having enormous amounts of information on the real-world vulnerabilities spotted in public-facing networks could help IT managers better prioritize their patch management plans, as well as hold vendors accountable for not just rapidly fixing vulnerabilities, but also ensuring that their customers are using the latest patches.

"Raising awareness about widespread vulnerabilities through large-scale scanning efforts yields better insight into the service landscape on the Internet, and hopefully allows both the community and companies to mitigate risks more efficiently," said Rapid7 security researcher Mark Schloesser in a related blog post.

Still, this remains relatively uncharted territory. Notably, Project Sonar is possible only due to recent big-data advances. "A few years ago, Internet-wide surveys were still deemed unfeasible, or at least too expensive to be worth the effort," said Schloesser, pointing to the need to use either lots of devices for scanning, or else long research periods. For example, the 2006 IPv4 Census conducted by the University of Southern California required four years' worth of data collection. Another effort, dubbed Internet Census 2012, illegally used 420,000 systems infected with the Carna botnet to scan about 660 million IP addresses and test 71 billion ports.

Today, however, purpose-built tools have allowed researchers -- with the right hardware -- to scan the Internet much more quickly. For example, Schloesser said, the ZMap network scanner, which is open source, can catalog every IPv4 address on the Internet in 45 minutes or less. Meanwhile, Errata Security CEO Robert David Graham's Masscan tool, which can generate 25 million packets per second, claims to be able to do the same job in just three minutes. "So this means that technically one can do Internet-wide scans with a single machine -- if the hardware is good enough," said Schloesser.
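The scan-time claims above are simple arithmetic: divide the roughly 4.3 billion IPv4 addresses by the packet rate. A quick back-of-envelope check (ignoring retries and responses):

```python
# Back-of-envelope check of the scan-time claims above.
IPV4_ADDRESSES = 2**32  # ~4.29 billion addresses

def scan_time_minutes(packets_per_second):
    """Minutes to send one probe to every IPv4 address, ignoring retries."""
    return IPV4_ADDRESSES / packets_per_second / 60

# Masscan's claimed 25 million packets per second:
print(round(scan_time_minutes(25_000_000), 1))  # ≈ 2.9 minutes
# A scanner saturating a gigabit link (~1.4 million packets/sec):
print(round(scan_time_minutes(1_400_000), 1))   # ≈ 51.1 minutes
```

The numbers line up with the claims: 25 million packets per second covers the address space in about three minutes, while gigabit-class rates land in the neighborhood of ZMap's 45-minute figure.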

But scanning -- and Project Sonar -- is more than a technical endeavor. For one thing, port scans are often interpreted as a sign of attack, and the researchers involved in Project Sonar said that even after notifying hosting companies of the research they were undertaking, they frequently got locked out of their accounts.

"Scanning the entire Internet is bad," said Errata Security's Graham, in his Masscan release notes. "For one thing, parts of the Internet react badly to being scanned. For another thing, some sites track scans and add you to a ban list, which will get you firewalled from useful parts of the Internet."

Furthermore, when it comes to extracting useful vulnerability information, scanning the Internet -- or parts thereof -- turns out to be the easy part. "If we try to parse the data sets ourselves, even with a team of 30 people, it would take multiple years just to figure out the vulnerabilities in the data set," Moore told SecurityWeek. "It's ridiculous, really."

"The more time I spend on these scan projects, the more I realize how big the job is," Moore added. "The majority of the work isn't just figuring out the vulnerabilities themselves, but you have to identify all the affected vendors, identify the firmware versions, coordinate the disclosure process."

"It's a ton of backend work," he said.

About the Author

Mathew J. Schwartz

Contributor

Mathew Schwartz served as the InformationWeek information security reporter from 2010 until mid-2014.

