The medical device industry has transformed over the last decade, driven by an explosion in the Internet of Medical Things and increased connectivity. As complexity around the technology, supply chains, and management of these devices grows, so do security concerns. Traditionally benefiting from no connectivity, or security through obscurity, today's medical devices are complex systems with multiple layers of commodity-based hardware and software. As a result, medical devices today are more vulnerable to generic threats that target mainstream software libraries and operating systems such as Windows and Linux. In fact, according to the "Healthcare Breach Report 2021," medical device attacks increased by 55% in 2020.
As the threat landscape continues to grow and get more complex, the medical device industry has been working to evolve how it addresses security efficacy. From a cybersecurity perspective, this takes forms such as threat modeling, which can determine the unique risk profile of a medical device. That unique risk profile then informs the design and implementation of security controls to lower those risks (and get approval from the Food and Drug Administration). These sorts of security controls are typically rooted in software.
But today, new microprocessor technologies (such as secure enclaves and cryptography acceleration) enable hardware to play a more prominent role in medical device security. How could a shift to more hardware-based security controls help in these devices?
During the last decade, medical devices often used custom operating systems or simply ran on bare metal, which gave them security through obscurity. But with the maturation of these devices, there's been a massive shift to commodity operating systems and commodity communication libraries. While wild stories of medical device attacks may steal the headlines, in reality, commodity-based vulnerabilities pose the biggest threat to medical device security today.
Manufacturers of medical devices often focus security efforts on locking down their proprietary software, which is essential but leaves other software layers exposed. As the industry matures, there is a growing concern that if security controls exist only in software, they can be undone in that same software. This realization is driving the move of certain software functions (or variables) into hardware roots of trust, where they can be better protected and cryptographically verified. Let's look at two examples that I've worked with.
First, inhalers. A big problem with systems that use consumables, such as inhaler systems or lab test equipment, is counterfeit or refilled consumables/cartridges. Much like printers, these systems generate their income through the consumables (such as the inhaled drug) rather than the inhaler itself. Security solutions at the software level were being reverse engineered, allowing for knockoff and refilled cartridges. Both posed a health risk to patients and a sizable monetary loss for the manufacturer.
Manufacturers needed to figure out how to move the anti-counterfeit and anti-tamper security down to an immutable layer: the hardware level. The solution used cryptographic keys rooted in hardware, burned in at manufacturing, to verify the authenticity of each cartridge, and then leveraged one-way hardware counters to track remaining dose counts. These controls prevented spent cartridges from being refilled (since the remaining-dose counter could not be increased) and blocked unauthentic cartridges from being accepted by the system.
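To make these two controls concrete, here is a minimal Python sketch that simulates them in software. All names and key material are hypothetical: in a real device, the key would live in a secure element and the counter would be implemented in hardware (such as fuses or a monotonic counter), not in mutable application code. The sketch uses an HMAC tag to stand in for cartridge authentication; actual products may use asymmetric signatures or challenge-response protocols instead.

```python
import hmac
import hashlib

# Hypothetical key material; on a real device this is burned into a
# hardware root of trust at manufacturing and never exposed to software.
MANUFACTURING_KEY = b"example-key-material"

def sign_cartridge(serial: str) -> bytes:
    """At manufacturing: tag each cartridge with an HMAC over its serial."""
    return hmac.new(MANUFACTURING_KEY, serial.encode(), hashlib.sha256).digest()

def is_authentic(serial: str, tag: bytes) -> bool:
    """On the device: verify the cartridge's tag before accepting it."""
    expected = hmac.new(MANUFACTURING_KEY, serial.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

class OneWayDoseCounter:
    """Models a one-way hardware counter: doses can only be consumed,
    never restored, so a spent cartridge cannot be 'refilled'."""

    def __init__(self, doses: int):
        self._remaining = doses

    @property
    def remaining(self) -> int:
        return self._remaining

    def consume(self) -> bool:
        if self._remaining <= 0:
            return False  # cartridge spent; device refuses to dispense
        self._remaining -= 1
        return True
```

A knockoff cartridge fails `is_authentic` because its tag was not generated with the manufacturing key, and a refilled cartridge is rejected because the counter only moves in one direction.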
Another area is the debugging capabilities of medical devices. Some security professionals would prefer all debugging capabilities (for example, JTAG and SPI) be removed from these devices. But right now, those supporting the devices for manufacturing and service use them for access. An excellent example of this is in prescription medical devices: something that is prescribed to a patient, then returned for use by a different patient. This can include in-home devices, such as sleep study equipment, diabetic monitoring, mobile EKGs, and more. After use, the device often goes back to the manufacturer to be refurbished and reset, leveraging the debugging ports to fully reflash the system, as though it were going through manufacturing again.
However, simply resetting configuration at the software application level potentially misses the risk of tampering that might have extended beyond the patient configuration (such as manipulation of boot parameters, BIOS settings, system identifiers, network information, and enabled OS services). The more secure solution is to use the debug ports to essentially reflash the device as though it's going through initial manufacturing (trust nothing on the system). Often this process involves newly provisioned crypto keys because the state of the current ones is unknown.
But what if we went a step further and pushed the root of trust into the hardware layer, so that even if a device ended up in the hands of a malicious patient, it couldn't fundamentally be altered and its crypto keys couldn't be manipulated or extracted? This is where a hardware root of trust, and capabilities such as trusted platform modules (TPMs), could help shift away from the need to leave debugging ports open.
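The TPM idea above can be sketched in miniature. The following Python simulation (all names and component strings are hypothetical) mimics how a TPM extends a platform configuration register (PCR) during measured boot and "seals" a key to an expected measurement: if any boot component is altered, the measurement changes and the key is never released. A real TPM performs these steps in hardware, so the key cannot be extracted even over a debug port.

```python
import hashlib
from typing import List, Optional

def extend(pcr: bytes, component: bytes) -> bytes:
    """Simulated PCR extend: fold each boot component's hash into a
    running measurement, as a TPM does during measured boot."""
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

def measure_boot(components: List[bytes]) -> bytes:
    """Measure a full boot chain (bootloader, kernel, application)."""
    pcr = b"\x00" * 32
    for component in components:
        pcr = extend(pcr, component)
    return pcr

class SealedKey:
    """A key 'sealed' to an expected measurement: released only if the
    boot chain measures exactly as it did at provisioning time."""

    def __init__(self, key: bytes, expected_pcr: bytes):
        self._key = key
        self._expected = expected_pcr

    def unseal(self, pcr: bytes) -> Optional[bytes]:
        if pcr == self._expected:
            return self._key
        return None  # tampered boot chain: the key never leaves the TPM
```

Sealing the device secret at provisioning time (against the measurement of the known-good boot chain) means a device whose bootloader or kernel was manipulated simply measures differently and cannot unseal its keys, with no debug-port inspection required.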
A lot of great growth has happened in the medical device security space over the last few years. As it continues to grow and evolve, it will be important to shift security lower into the hardware and firmware layers. To make this a reality, manufacturers and their technology partners are collaborating on new solutions.