The true value of crowdsourced security lies in the diversity of knowledge, experience, and bug-hunting methods that security researchers employ. That value is often amplified by the differing cultures, perspectives, and backgrounds found across geographic locations. Bugcrowd’s crowdsourced bug-bounty program, for example, is quite diverse, with participating researchers from no fewer than 112 countries organized into several regions.
While there are many differences among those 112 countries, the top researchers from each region — many of them described as “Super Hunters” — appear to share a common goal: to find the most high-impact, valid bugs before a bad guy does.
To underscore the value these individuals bring to the cybersecurity table, this slide show profiles seven top-ranked Bugcrowd researchers — one from each of the seven regions with the largest volume of bug submissions.
To help understand the data presented with each researcher, refer to the following definitions:
- Acceptance Rate: A comparison of a researcher’s valid reports to invalid ones.
- Average Priority: Taken in context with a researcher’s rank and Acceptance Rate, this helps identify outstanding researchers who consistently submit high-impact vulnerabilities, even if they submit at lower volume.
- Kudos Points: These recognize researchers for valid vulnerability reports, independent of any monetary or swag prizes associated with the bounty program. The more severe the vulnerability’s impact, the more points awarded (from 5 to 20).
This presentation is a precursor to a new report being developed by Bugcrowd, which will examine the psychology of bug hunters, what motivates them, and how widely researchers differ from one another.
Before we begin, imsmartin would like to thank the Bugcrowd team for making this information available.