What was once an infrequent arrangement practiced by well-intentioned white hat security researchers has become a market with norms that are being defined and enforced on a daily basis. In fact, vulnerability disclosure has become a lucrative career for determined independent researchers. The practice of "good faith effort" has gone from being the responsibility of the researcher to the obligation of the company.
Today, there is an expectation in the market for companies to be receptive to independent security research, and companies that react negatively are portrayed as shortsighted and ineffective. In this evolving security research market, two drivers are changing the old model.
The first is that researchers increasingly expect to be rewarded for their findings. Companies like Microsoft and GitHub are legitimizing the bug bounty policies pioneered by Google and Facebook, which pay researchers for security vulnerabilities. (Facebook especially has seen a substantial increase in the rate of submissions.) Bringing this reward structure to the mainstream changes the expectations of security researchers in general.
The second driver is a flood of inexperienced researchers joining the market to take advantage of the new economy. While this might seem like a negative change on the surface, experience shows that the increase in numbers allows companies to tap into the powerful benefits of crowdsourcing. At relatively low cost, crowdsourced security testing produces better code coverage and more realistic attack vectors.
These changes have created a new model in the disclosure market: "transactional disclosure." A transaction is a business arrangement in which each side is equitably compensated through a simple, repeatable process. In the bug bounty market, a transaction is a monetary reward paid for submitting a security bug, and there are thousands of these transactions every year, with payouts averaging a few hundred dollars or less.
More eyeballs & a new vocabulary
For experienced researchers disclosing serious, high-impact bugs, the process is well understood but labor intensive. The expectation is that the effort required to collect a reward will be proportionate to the reward itself. Disclosure usually involves a time-consuming exchange of banking and tax information, along with other identifying details. In return, the reward is usually a large sum of money. With the coming generation of career bug bounty researchers making their money through higher volumes of lower-impact, lower-value bugs, the effort on both sides is only worthwhile if the disclosure process is streamlined and efficient.
Researchers expect communication and validation in a timely manner, and, as a result, a shorthand vocabulary has evolved: a submission is either a "valid bug" or an invalid submission. A "duplicate" is a valid bug that is either already known from a previous submission or part of the original list of known bugs set in the terms of the policy. A bug can be valid and still go unrewarded because its impact is not high enough for the company to justify fixing it. These predictable activities are forming the basis for the CERT Division of the Software Engineering Institute's Vulnerability Disclosure Policy.
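This shorthand vocabulary amounts to a small decision procedure. As a minimal sketch only (the outcome names, function, and numeric impact threshold below are illustrative assumptions, not any company's actual triage logic):

```python
from enum import Enum, auto

class Outcome(Enum):
    """Hypothetical triage outcomes mirroring the shorthand vocabulary."""
    INVALID = auto()            # not a reproducible security bug
    DUPLICATE = auto()          # valid, but already known or excluded by policy
    VALID_UNREWARDED = auto()   # valid, but impact too low to justify a fix
    VALID_REWARDED = auto()     # valid and paid out

def triage(is_real_bug: bool, already_known: bool,
           impact: int, reward_threshold: int) -> Outcome:
    """Classify a submission; names and thresholds are illustrative only."""
    if not is_real_bug:
        return Outcome.INVALID
    if already_known:
        return Outcome.DUPLICATE
    if impact < reward_threshold:
        return Outcome.VALID_UNREWARDED
    return Outcome.VALID_REWARDED
```

The point of encoding the vocabulary this way is that every submission lands in exactly one bucket, which is what makes the high-volume, transactional process repeatable for both sides.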
This army of eyeballs is changing the dynamic between security teams and developers as they hand off lists of vulnerabilities found in the wild. The crowd, with its sheer volume, feels more like the public eye. With that comes the expectation that companies can commodify their responses in the same way researchers are commodifying their high volume of lower-impact bugs. A vulnerability disclosure no longer lives in a vacuum: when researchers choose to disclose, they bring to the table expectations shaped by every previous experience disclosing bugs to other companies.
The biggest sign that the model for vulnerability disclosure has changed is the emergence of new crowdsourced security companies like Bugcrowd, where I work as a community manager for a crowd of over 8,000 security researchers. These third parties are streamlining the disclosure and payment process to the level of easy transactions. Researchers can reuse the same payment information across dozens of sites when testing for bugs, making the effort easier to justify. As an added bonus, the service provides a layer of abstraction and anonymity between the researcher and the target company.
Bug bounties are becoming big business, and these crowdsourcing services are iterating on the transactional model to make it as easy as possible to get as many bugs as possible out of the wild. Companies need to be aware that there is a new community of security researchers with evolving expectations about vulnerability disclosure. Those that do not stay in front of this trend may end up not only with a breach on their hands, but with a lack of public sympathy as well.