In the fast-emerging Internet of Things, medical device safety is reaching a critical juncture. Here are three challenges infosec professionals should begin thinking about now.

Greg Shannon, Ph.D., chair, IEEE Cybersecurity Initiative & Chief Scientist, CERT Division, Carnegie Mellon University Software Engineering Institute

March 4, 2015

4 Min Read

Medical devices -- particularly those worn on or in the body -- are probably the most personal aspects of the emerging Internet of Things (IoT) and should be as secure and private as possible. Certainly, attention to this new challenge is well-warranted, given current events and trends.

Consider the parallel with the advent of mass transportation via airplanes and automobiles in the early 20th century, when regulation dramatically reduced mortality. As a recent front-page article in The New York Times, “Hacked vs. Hackers: Game On,” noted:

“… the number of airplane deaths per miles flown … decreased to one-thousandth of what it was in 1945 with the advent of the Federal Aviation Administration in 1958 and stricter security and maintenance protocols.” Meanwhile, “there has been more than a 10,000-fold increase in the number of new digital threats over the last 12 years.”

Headlines that detail the breaching of public and private computer networks, the theft of data, the distribution of malware, and malicious computer and network attacks are legion. And there’s no reason to think that wearable or implanted medical devices are immune to this trend. In fact, another article from The New York Times, “A Heart Device Is Found Vulnerable to Hacker Attacks,” established that a malicious intrusion into a combination heart defibrillator and pacemaker was indeed possible (albeit expensive, computing-intensive, and time-consuming) as far back as 2008.

Clearly, the operations and data relating to these devices must be protected for the patient’s health. But it’s also a legal requirement imposed by HIPAA, the Health Insurance Portability and Accountability Act of 1996. Thus our challenge crosses multiple domains, including medicine, computing, law and ethics, to name but a few.

Authentication and verification
One obvious challenge in this field is low-power, bi-directional authentication and verification. How do I know that the device’s data readout is authentic? How does the device know that it’s presenting data to an authorized user? Challenges of this sort are common in the cyber domain, but a mobile, small-form-factor device adds constraints of its own: low power, limited processing and data storage, and constrained air interfaces and protocols. This is essentially an engineering challenge, and one that will be solved.
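To make the problem concrete, here is a minimal sketch of one common pattern for constrained devices: challenge-response mutual authentication over a pre-shared symmetric key. The key, function names, and protocol shape are illustrative assumptions, not a description of any actual medical device; real deployments must also solve key provisioning, revocation, and replay protection.

```python
import hmac
import hashlib
import os

# Hypothetical pre-shared key, provisioned at manufacture (assumption for illustration).
KEY = b"device-shared-secret"

def respond(challenge: bytes, key: bytes = KEY) -> bytes:
    """Prove knowledge of the key without ever transmitting it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes = KEY) -> bool:
    """Check a response; compare_digest resists timing attacks."""
    return hmac.compare_digest(respond(challenge, key), response)

# The reader authenticates the device with a fresh random challenge...
c1 = os.urandom(16)
assert verify(c1, respond(c1))
# ...and the device authenticates the reader with its own fresh challenge,
# giving bi-directional assurance from a single shared secret.
c2 = os.urandom(16)
assert verify(c2, respond(c2))
```

An HMAC over a nonce is cheap enough for low-power hardware, which is precisely why the engineering question is less "what primitive?" than "how do we fit key management into a device that may stay implanted for a decade?"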

Another concern stems from legacy devices. When the Food and Drug Administration (FDA) approves a device, it is essentially approved forever, so whatever legacy device or software was in use at the time of approval continues in use. Though we can expect obsolescence and device turnover, we can also expect a lag time during which those devices and/or software may be vulnerable. Security and privacy in a mobile medical device, as in other domains, are likely to be optimal when they’re tightly integrated with operations.

For security professionals, there are already a number of resources for raising industry awareness and increasing personal knowledge of IoT design best practices. An IEEE Computer Society Cybersecurity Initiative workshop in New Orleans, for example, focused on building codes for medical device software security, drawing on work by Carl Landwehr, lead research scientist at the Cyber Security Policy and Research Institute (CSPRI) at George Washington University. His paper, “A Building Code for Building Code,” suggests known, effective measures for writing secure software, using medical devices as the first application. Although his specific prescriptions are hypothetical at this point, they make sense and are being explored.

I find the metaphor of “building codes” compelling because it captures the value of a standardized approach to security and privacy in medical devices, wearable or otherwise. One aspect of Landwehr’s approach is to use programming languages that integrate operational and security concerns.

In addition, the IEEE Cybersecurity Initiative’s Center for Secure Design’s recent paper, “Avoiding the Top Ten Software Security Design Flaws,” addresses issues that apply to mobile medical device design.

The evolution of health data
Not all solutions lie within the device itself, however. To draw conclusions about a population from individual health data, a secure means of data sampling may also evolve. One such approach is described in “RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response,” a technology for anonymously crowdsourcing statistics from end-user client software with strong privacy guarantees, one that allows “the forest of client data to be studied, without permitting the possibility of looking at individual trees.”
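The core idea behind RAPPOR, randomized response, can be sketched in a few lines. This is a deliberate simplification (real RAPPOR encodes values in Bloom filters and applies two rounds of randomization); the function names and single-bit setting here are illustrative assumptions:

```python
import random

def randomize(true_bit: int, f: float = 0.5) -> int:
    """With probability f, report a fair coin flip; otherwise report the truth.
    No individual report can be trusted, which is the privacy guarantee."""
    if random.random() < f:
        return random.randint(0, 1)
    return true_bit

def estimate_proportion(reports, f: float = 0.5) -> float:
    """Invert the known noise model to recover the population proportion.
    E[observed] = (1 - f) * p + f * 0.5, so solve for p."""
    observed = sum(reports) / len(reports)
    return (observed - f * 0.5) / (1 - f)

random.seed(2015)
true_bits = [1] * 3000 + [0] * 7000   # true population proportion: 0.30
reports = [randomize(b) for b in true_bits]
# The aggregate estimate lands near 0.30, even though any single
# report reveals almost nothing about the individual who sent it.
estimate = estimate_proportion(reports)
```

Because the noise is injected on the client before anything leaves the device, the aggregator can study the forest without ever seeing a trustworthy individual tree.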

An effort I was not involved in, but which demonstrates the broad interest in this topic, was a workshop held this past October: Collaborative Approaches for Medical Device and Healthcare Cybersecurity. The effort was sponsored by the Food & Drug Administration/Center for Devices and Radiological Health, the Department of Homeland Security/C3 Voluntary Program, and the Department of Health and Human Services/Critical Infrastructure Protection Program.

Any investments in meeting the challenges inherent in secure and private mobile medical devices will pay dividends with wide application to other less critical domains. But it’s important to understand that medical device security and privacy is a current concern, not a future one. While focused work is currently under way, these efforts will need continued support and attention. We have an exciting opportunity to meet a relatively new, evolving challenge. And complacency is not one of our options.

About the Author(s)

Greg Shannon

Ph.D., chair, IEEE Cybersecurity Initiative & Chief Scientist, CERT Division, Carnegie Mellon University Software Engineering Institute

Dr. Greg Shannon is chair of the IEEE Cybersecurity Initiative and Chief Scientist for the CERT® Division at Carnegie Mellon University's Software Engineering Institute, where his role is to expand the division's research results, impact, and visibility. Outside of CERT, Shannon influences national and international research agendas by promoting data-driven science for cybersecurity. Shannon is also a co-organizer of the DIMACS and IEEE Workshop on Efficient and Scalable Cyber-security using Algorithms Protected by Electricity (ESCAPE). He cofounded the Workshop on Learning from Authoritative Security Experiment Results (LASER).

Prior to joining CERT, Shannon was Chief Scientist at two startups working on statistical anomaly detection in sensor streams, the science of cybersecurity, and insider threats. In earlier positions, he led applied research and development in cybersecurity and data analysis at Lucent Technologies, Lumeta, Ascend Communications, Los Alamos National Laboratory, Indiana University, and his own startup company.
