Exploit Devs At Risk: The Nuclear Scientists Of The Next Decade?
Will a nation's exploit developers become targets of state-sponsored assassination in the future, much like the nuclear scientists of the past century?
When news stories broke last month regarding the legitimacy of using lethal force against civilian hackers, I questioned what the future might hold for exploit devs and other members of the cybersupply chain who facilitate state-funded, offensive cybercapabilities -- particularly in more belligerent regimes, such as Iran and North Korea. Are we inevitably on a path where these individuals face the same level of risk that, say, Iranian nuclear researchers have faced during the past few years?
As extreme as this might sound at first glance, parallels between nuclear proliferation and cyberconflict are often drawn, primarily due to the potentially paradigm-shifting nature of both technological advances.
Nuclear scientists have been a hot commodity since Ernest Rutherford first split the atom in 1917. Since then, history has been littered with scientists who were captured, killed, or defected to foreign states -- most recently the numerous slain Iranian scientists, as well as Shahram Amiri, another Iranian scientist, who was reported to have defected to the U.S. courtesy of the CIA in 2010.
While nuclear and cyberplatforms are clearly not the same, the high demand for individuals capable of building highly sophisticated and dependable cybercapabilities, coupled with the apparent desire of a growing number of nation-states to gain superiority in this domain (otherwise known as an arms race), inevitably creates an environment similar to the one that has existed in the nuclear domain for much of the past century.
To be clear, I'm not talking about CISSP-donning "researchers" hell-bent on finding every possible XSS flaw in some open-source shopping cart that no one really cares about. I'm talking about world-class engineers capable not just of identifying ground-breaking flaws in software and hardware platforms, but of translating their research into a capability with the scale and operational usefulness an advanced nation-state requires to carry out its mission.
Unlike nuclear weapons, whose effect remains relatively constant over time, cybercapabilities degrade as the IT enterprise inevitably becomes better at defense. The demand to maintain capabilities that achieve the levels of penetration into the enterprise we typically see during attacks today will therefore only increase -- and with it, the demand for the talent required to build those capabilities and the desire of competing nation-states to prevent their likely enemies from acquiring them.
Interest in disrupting a would-be enemy's capability supply chain has already surfaced, even in the public domain. The identification of individuals associated with the PLA's Unit 61398 earlier this year, in various reporting on the Comment Crew attacks, is a good example of a growing desire to attribute attacks down to the individuals employed within the state agencies responsible for them.
Although I think we're likely far from the point where one nation would use lethal force against another's key cyberinnovators, deterrence tends to follow an escalating path. It often begins with legal frameworks (something the U.N. is actively pursuing today), U.N. directives, and political negotiations -- and has historically ended with a motorcycle-riding, magnetic-bomb-wielding assailant, as was the case with an Iranian scientist killed in early 2012.
Tom Parker is CTO at FusionX.