Security researchers walk a fine line between white hat and black hat activities. Sometimes, even when they stay on the right side of that line, the legal side, they still find themselves facing criminal charges. Consider the case of Justin Shafer: he found a security hole in a dental practice's servers and reported the incident to the company.
While some companies would have paid Shafer a "bug bounty," he was unfortunate enough to find a hole at a company that doesn't understand what security researchers actually do. By reporting the hole, he effectively implicated himself as a cybercriminal, and he is now facing criminal charges for "exceeding authorized access."
As if security researchers didn't have enough reason to worry about being seen as criminals, we now have bug poachers confusing matters even further. According to information from IBM, bug poachers have hit at least 30 companies. Bug poachers breach a company's infrastructure, typically using a SQL injection attack against a vulnerability in the company's website. Once inside, they steal data, but here is the twist: unlike typical black hats, bug poachers don't sell the data. They extort their victims, telling the company it must pay to get information on how it was breached.
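SQL injection, the technique the article says poachers typically use, works by smuggling SQL syntax into a field the application expects to contain plain data. A minimal sketch, using Python's built-in sqlite3 module and an in-memory table (the table and data here are illustrative, not from any real breach), shows both the flaw and the standard fix:

```python
import sqlite3

# Demo database with a single user record.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

# Vulnerable pattern: user input concatenated directly into the SQL string.
# The classic payload below turns the WHERE clause into a tautology, so the
# query returns every row instead of the one the developer intended.
attacker_input = "' OR '1'='1"
vulnerable_query = "SELECT * FROM users WHERE name = '" + attacker_input + "'"
print(conn.execute(vulnerable_query).fetchall())  # leaks all rows

# Safe pattern: a parameterized query treats the input strictly as data,
# never as SQL, so the same payload simply matches no user name.
safe_query = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe_query, (attacker_input,)).fetchall())  # []
```

The vulnerabilities poachers exploit are of exactly this well-understood kind, which is what makes their "service" claim, discussed below, so hollow.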
The bug poachers argue that they are doing companies a service: they are making companies aware of potentially harmful vulnerabilities in their systems. Yet the vulnerabilities they exploit are publicly known and have patches. If they stopped once they pointed out the vulnerability, they would be security researchers. It is what they do after a flaw is found that makes them something else entirely.
Researchers publish their findings after the company has had a chance to fix the vulnerability. They most certainly do not request funds for information or threaten to actively exfiltrate data. Poachers, on the other hand, are extortionists taking advantage of a well-established yet often unrecognized fact: applications are inherently insecure.
Why Poachers Are Taking Advantage
Software isn't designed with cybercriminals in mind; it is designed and built with functionality as the main goal. As a result, design flaws, vulnerable open source components, idiosyncrasies in programming languages and other insecure coding practices all contribute to a large number of vulnerabilities. Research from Veracode has shown that 70% of all applications have at least one vulnerability from the OWASP Top 10 upon first scan.
This astronomical number of vulnerabilities leaves us dependent on the kindness of security researchers to help us find vulnerabilities before they are exploited. And that's why so many companies have enacted bug bounty programs. Instead of punishing researchers for finding and responsibly disclosing vulnerabilities, bug bounty programs reward researchers for their work. This way, these talented individuals are not tempted to use their skills to make money in illegal ways—and there are plenty of illegal activities they could choose to do instead of responsibly disclosing vulnerabilities.
Stopping the Problem at the Source
But companies shouldn't depend on the kindness of strangers (security researchers). Instead, they need to take responsibility for their software and do the best they can to find vulnerabilities before applications are in production. Yet, according to the biennial Global Information Security Workforce Study published by (ISC)², 30% of companies never scan for vulnerabilities in their software. No wonder we are seeing so many breaches, ransomware attacks, and now bug poaching. The proliferation of vulnerable software is making it too easy for cybercriminals to be successful. It is too lucrative an opportunity for many talented hackers to ignore.
What can be done? The first action companies can take is to assess software for vulnerabilities during the development stage of the software lifecycle. But the software lifecycle doesn't end at the development stage, and neither should security efforts. A shifting security landscape means new vulnerabilities are found all the time, and if a development team uses third-party and open-source components in their engineering efforts—and most do—it is possible to have a completely secure development process and still end up with vulnerabilities. This is why protecting applications in production is just as important as eliminating vulnerabilities in the first place.
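One concrete form this ongoing effort takes is checking an application's third-party components against known vulnerability advisories on every build, not just once. A minimal sketch of the idea, with entirely hypothetical component names and advisory data (real pipelines pull this from a vulnerability database):

```python
# Hypothetical inventory of third-party components shipped with an app,
# mapping component name to its pinned version.
components = {"libfoo": "1.2.0", "libbar": "2.0.1", "libbaz": "0.9.4"}

# Hypothetical advisory feed: component name -> versions with known flaws.
advisories = {"libbar": {"2.0.0", "2.0.1"}, "libqux": {"3.1.0"}}

def vulnerable_components(components, advisories):
    """Return components whose pinned version appears in an advisory."""
    return {
        name: version
        for name, version in components.items()
        if version in advisories.get(name, set())
    }

# Running the check flags libbar 2.0.1, which a real build would then
# block until the component is upgraded or the advisory is triaged.
flagged = vulnerable_components(components, advisories)
print(flagged)  # {'libbar': '2.0.1'}
```

Because advisory feeds change constantly, the same inventory that passes today can fail tomorrow, which is exactly why the check belongs in every build rather than only at release time.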
A bug bounty program can go a long way toward attracting the right kind of probing into a company’s applications. And security researchers have done a lot to help companies fix vulnerabilities before the world finds out about them. But as this new wave of black hat hackers known as bug poachers demonstrates, there are still too many creative and talented hackers out there who are more than comfortable occupying the gray and sometimes black space of cybercrime. Let’s not make their job too easy.