It’s not clear why a dozen FBI agents showed up at a security researcher’s door last month, but as cyber becomes more of a factor in product safety, our judicial system needs to get a better grasp on who the real criminals are.

Adam Shostack, Leading expert in threat modeling

June 10, 2016

4 Min Read

I want to juxtapose several articles I read last Friday morning. The first is “Bug Poaching: A New Extortion Tactic Targeting Enterprises;” the second is “FBI raids dental software researcher who discovered private patient data on public server;” and the third is “Smart Meter Companies Sue Local Activist and City to Block Disclosure of Security Audits.”  (I should mention that I sit on the board of the Seattle Privacy Coalition with that local activist, Phil Mocek.)

Frankly, the “extortion” article, published on IBM’s Security Intelligence website, presents a lot of opinion as if it were fact. I'll add both emphasis and comments as I quote:

This is all being done under the disguise of pretending to be a good guy when, in reality, it is pure extortion on the black hat scale [AS: what do those last three words even mean?]. The attack is carried out by criminals [AS: what court has convicted them?] pretending to want to do something good for the organization but demanding payment for doing so...

Now, I don't think asking for money without a contract is reasonable; it strikes me as the electronic equivalent of “squeegee men” who run up to cars, spray the windshield, and then get threatening if they don’t get paid. But it could also be a failure of communication. The IBM blog post doesn't include entire messages, but it's reasonable to think some of them come from people whose first language is not English. It might also be that the researchers think, perhaps reasonably, that they did some work and deserve some payment, and that their thinking is informed by analogies to bug bounties.

Let me be clear. Extracting data from a live system crosses an ethical line, and I think that Alex Stamos did a great job of laying out those lines in his post about “Bug Bounty Ethics.” (Sometimes extracting data about yourself or a friend who has consented in advance is key to seeing if an issue is real.)

But I don’t want to focus on the writing, however tempting it is.

More important is that this sort of bombastic writing, carried out under IBM's logo, carries weight, and relates to situations like security researcher Justin Shafer apparently being raided by the FBI while trying to do the right thing. It also seems to relate to the Seattle City Light case. In the case of meters that will be connected to hundreds of thousands of houses, if there's a security problem that can be found by reading design documents, then that's a serious violation of Kerckhoffs' Principle: a system should remain secure even when everything about its design, except the key, is public.

Reverse-engineer the meters

Whatever exact problems are known to the vendors will likely come to light when people start reverse-engineering the meters. Some commercial organizations might not want to see their products scrutinized, but avoiding that scrutiny is an unrealistic goal. The commercial reality is, and has been, that your product will be scrutinized by security reviewers, and those reviewers will look at a variety of characteristics. If you can’t stand to see your product reviewed, then for what purpose do you believe it is fit?

It is tempting to assign equivalence and assert that researchers and companies both need to behave more responsibly. This is a trap we should avoid. We should expect organizations, talking to their lawyers and deciding on a corporate course of action, to behave in a more thoughtful fashion than we should expect of an individual. They’re not equivalent: a company will almost certainly spend a smaller proportion of its resources on a dispute than an individual will. (That is, a small $10M/year company spending 1% of its turnover on lawyers spends $100K; a security pro making $200K spends half their annual income to match that.)
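To make that asymmetry concrete, here is a minimal back-of-the-envelope sketch in Python. The figures are the hypothetical ones from the example above, not data:

```python
# Back-of-the-envelope comparison of legal spending as a share of resources.
# All figures are hypothetical, taken from the example in the text.

company_revenue = 10_000_000            # small company: $10M/year turnover
company_legal = 0.01 * company_revenue  # 1% of turnover spent on lawyers

individual_income = 200_000             # a well-paid security professional
matching_spend = company_legal          # what it costs the individual to match

print(f"Company: ${company_legal:,.0f}, "
      f"or {company_legal / company_revenue:.0%} of its revenue")
print(f"Individual: ${matching_spend:,.0f}, "
      f"or {matching_spend / individual_income:.0%} of their income")
# Prints 1% of revenue for the company vs. 50% of income for the individual.
```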

Now, there's another view, which is that many researchers will find more issues in their careers than companies will have reported to them. As such, we should expect better behavior from the average researcher than we see from the average company.

And, in fact, we do. Of the ten thousand or so vulnerability reports filed last year, I believe that most were coordinated in some way. Relatively few were dropped as 0day. Even fewer were intended as, or confused for, ransom attempts.

It’s not yet clear why a dozen agents showed up at Shafer’s door, but what is obvious is that there was a discussion that preceded that raid. We should expect better of the FBI. Much as researchers will handle many issues over their careers, we are approaching the point where we can expect every FBI office to have dealt with cyber issues. We should expect that the case selection process includes questions like “is there objective evidence of criminal intent here?” (For example, did Shafer demand money from Henry Schein Dental?)

We should expect better of our courts, our laws, and those that enforce them.


About the Author(s)

Adam Shostack

Leading expert in threat modeling

Adam Shostack is a leading expert on threat modeling. He's a member of the Black Hat Review Board, and helped create the CVE and many other things. He currently helps many organizations improve their security via Shostack & Associates, and helps startups become great businesses as an advisor and mentor. While at Microsoft, he drove the Autorun fix into Windows Update, was the lead designer of the SDL Threat Modeling Tool v3, and created the "Elevation of Privilege" game. Adam is the author of Threat Modeling: Designing for Security, and the co-author of The New School of Information Security.
