When good technology is designed for ostensibly good organizations but they use it for bad purposes, who's to blame? And if those bad practices aren't prevented now, will they soon become worse, or even accepted?
This debate over Pegasus is very important. Pegasus is marketed as a cyber-intelligence solution that helps law enforcement and intelligence agencies covertly extract information from any mobile device — most often by exploiting vulnerabilities in the firmware to run an agent that extracts data, captures images, tracks movement, and records all communications and interactions. It does this with a degree of operational security that frustrates detection and attribution, supporting clandestine activities.
Inevitably, the technology has evolved and been enhanced since it hit the market. Pegasus can now exploit vulnerabilities in widely used mobile apps, and we can expect the innovations to keep coming.
Things happen fast in our industry: The journey from outlandish to outlier to mainstream (and even legacy) can be a short one indeed. If history is any guide, what's happening in the present gives us an idea of what might happen in the future, and that prospect is troubling.
While the company behind Pegasus, NSO Group, doesn't describe the technology as spyware, almost everyone else does: It is software fundamentally intended to secretly obtain information from a particular person or entity. The very intent is covert surveillance, and the technology has been controversial since its creation because nations that might otherwise be unable to build such a capability can now simply buy it.
NSO Group stresses that the purpose of Pegasus is defense — it helps guard against evildoers of all stripes. It meets the "challenges of encryption" during terrorism and other investigations. As such, it's a critical tool in modern guerrilla warfare and info-terrorism.
But the startling revelations unveiled by Amnesty International and the consortium of journalists known as Forbidden Stories showed that targets of Pegasus have included not only criminals but also human rights defenders, hundreds of journalists from at least two dozen countries, numerous heads of state, diplomats, and political rivals. And beyond legitimate users making unsavory use of the technology, Pegasus has also been deployed against journalists investigating drug cartels.
This gives rise to a host of ethical and moral questions around spyware's impact on the world that demand answers. First, in cases where human rights are endangered, should organizations like NSO be expected to have visibility into their customers' activities?
It may be unrealistic to expect any producer — particularly a software developer — to have full transparency. However, vendors can know more than they often claim, as "know-your-customer" controls in other sectors have shown. Vendors can actively seek to prevent abuse and tighten the distribution process to ensure solutions are sold only to public entities that can be held accountable for systemic misuse. In this case, to which governments and other entities has the software been sold? What was the intended purpose, and did the subsequent use fall outside those parameters? Did NSO Group learn about the misuse, and what did it do about it?
Second, should the industry govern organizations (a global equities process, if you will) that identify and exploit vulnerabilities without reporting them for remediation? In a digitally interconnected global economy, this is most likely not practical, although China recently passed laws in an attempt to do this. As defenders, then, we must accept and assume pervasive vulnerability and exploitation — which means focusing cyber defense on workable outcomes rather than on stemming proliferation.
Next, as with any commercial undertaking, follow the money. Should for-profit entities have to disclose ethically dubious customer use in their corporate ESG reporting?
This is definitely a case where the old "it was just business" defense really doesn't cut it. Any company selling technology that could be used for cyber-offense activities has, at the very least, obligations under know-your-customer mandates. There are also nonproliferation controls, such as the Wassenaar Arrangement, which a number of nations have signed — but not all, and enforcement is challenging, if possible at all, in the real world.
Corporate governance doesn't end when the deal closes. Revenue and future profitability tied to programs like Pegasus are supposed to be based on contracts that guide usage. Profiting from illegitimate use, even unknowingly, should be off-limits.
Finally, if the current uses gradually gain acceptance, how much worse will things get in 10 years? Will spyware-as-a-service become an investor darling, with new startups drawing big private equity and venture capital funds for tools that enable even more efficient and effective cyber intrusions?
NSO Group wasn't the first and definitely won't be the last: We've seen earlier incarnations of commercial spyware used in different regions, and sadly, we're going to see a lot more. Intelligence agencies, law enforcement, and state proxy actors alike understand the value of offensive cyber capabilities, and they want them to fight criminals and terrorists. Unfortunately, the criminals want them too.
This genie is not going back in the bottle. Spyware and other cyber-offensive solutions will become more efficient, more affordable, more diversified, and more prevalent. It's up to every constituency — developers, technology professionals, policymakers, intelligence agencies, regulators, and more — to ensure that the technology is accessible only to those who do not operate in a tyrannical manner, and that they use it solely in the pursuit of global security and stability.