Protecting Field Programmable Gate Arrays From Attacks

FPGAs can be part of physical systems in the aerospace, medical, or industrial fields, so a security compromise can be potentially serious.

If you've talked on a cell phone, browsed the Internet, or watched a show on a 4K TV, you've benefited from field programmable gate arrays (FPGAs). Other examples of FPGAs at work include Google search algorithms returning results quickly and self-driving cars distinguishing between a speed bump and debris on the road. These integrated circuits, built to process large amounts of data, will be important to future technologies in aerospace, high-performance computing, medical systems, and artificial intelligence/machine learning. As data processing spreads across the network, from the core out to the edge, the performance and security demands on FPGAs have grown.

These devices are designed to be configured after the initial semiconductor manufacturing process is complete. FPGAs can be customized to accelerate key workloads and enable design engineers to adapt to emerging standards or changing requirements. They contain an array of programmable logic blocks, as well as a hierarchy of reconfigurable interconnects that allow blocks to be wired together to process specific functions.
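The "programmable logic block" idea can be illustrated with a toy model (a sketch for intuition only, not any vendor's architecture or API): each block is essentially a lookup table (LUT) whose truth table is loaded from the configuration bitstream, so the same silicon can implement different logic after manufacturing.

```python
# Toy model of an FPGA logic block: a 4-input lookup table (LUT).
# The 16-entry truth table plays the role of the configuration bits
# that the bitstream loads into the device. Illustrative only.

def make_lut4(truth_table):
    """Build a 4-input logic function from a 16-entry truth table."""
    if len(truth_table) != 16:
        raise ValueError("a 4-input LUT needs exactly 16 configuration bits")
    def lut(a, b, c, d):
        # The four inputs form an index into the stored truth table.
        index = (a << 3) | (b << 2) | (c << 1) | d
        return truth_table[index]
    return lut

# "Reconfigure" the same block as two different gates.
and4 = make_lut4([0] * 15 + [1])  # outputs 1 only when all inputs are 1
xor2 = make_lut4([((i >> 3) & 1) ^ ((i >> 2) & 1) for i in range(16)])
```

The interconnect hierarchy the paragraph mentions would then be the wiring that routes one LUT's output into another's input, which is what lets designers compose these blocks into complete functions.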

FPGAs are meant to be versatile, customizable, and powerful, but this flexibility can be a double-edged sword. Because the configuration bitstream controls the device's low-level functionality, an adversary who can tamper with it can install backdoors for later access, alter how the FPGA functions, or physically sabotage the system. An example in the wild is the 2020 Starbleed vulnerability affecting Xilinx 7-series devices, which let attackers circumvent bitstream protections and load their own malicious configuration onto vulnerable devices.

While FPGAs have not been the usual target of broad attacks, today's threat landscape is shifting quickly, and these devices must offer security equivalent to that of their platform counterparts. Given that FPGAs can be part of physical systems in the aerospace, medical, or industrial fields, any security compromise is potentially serious. A compromised bitstream could introduce backdoors and unexpected functionality, which, depending on the device, could result in physical damage to the system, its users, or surrounding infrastructure.

How to Secure FPGA Hardware
Fortunately, today's FPGAs are incorporating well-known industry security features, such as bitstream encryption, multifactor authentication, platform attestation, and key storage. Here, I focus on three lesser-known features: side-channel attack protection, anti-tampering, and anti-cloning.
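Before turning to those three, the first line of defense named above, bitstream authentication, is worth a quick sketch: the loader verifies an authentication tag over the bitstream with a device-held key and refuses to configure the device if verification fails. The example below is a hedged illustration using HMAC-SHA256 from Python's standard library; real FPGAs use dedicated hardware engines (typically AES-GCM or RSA/ECDSA signatures), and the key and function names here are hypothetical.

```python
import hashlib
import hmac
import secrets

def sign_bitstream(bitstream: bytes, device_key: bytes) -> bytes:
    """Compute the authentication tag the loader will check before configuring."""
    return hmac.new(device_key, bitstream, hashlib.sha256).digest()

def load_bitstream(bitstream: bytes, tag: bytes, device_key: bytes) -> bool:
    """Refuse to configure the device unless the tag verifies."""
    expected = hmac.new(device_key, bitstream, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)  # constant-time comparison

key = secrets.token_bytes(32)         # stands in for a fused device key
bitstream = b"\x00\x09\x0f\xf0" * 64  # stands in for real configuration data
tag = sign_bitstream(bitstream, key)

assert load_bitstream(bitstream, tag, key)                 # authentic: accepted
assert not load_bitstream(bitstream + b"\x01", tag, key)   # tampered: rejected
```

Note the constant-time tag comparison: even the verification step is written to avoid the kind of timing leakage discussed in the next bullet.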

  • Side-channel attack protection: Traditional side-channel attacks are passive attacks that leak secrets by observing a system's behavior, such as its power consumption, electromagnetic emissions, or timing. FPGAs use a variety of methods to limit side-channel leakage and shrink the attack surface; for instance, dedicated countermeasures protect keys and other confidential data against such nonintrusive attacks.
  • Anti-tamper technology: Hardware exploitation often requires a physical attack. Ideally, the FPGA should be able to detect an attack and protect itself in some way — for example, by resetting sensitive keys. Anti-tamper technology monitors system characteristics, such as voltage, temperature, and internal clocks, to determine whether the system has been potentially compromised or is operating in an unexpected way. Once a potential tamper event is detected, the system can respond by resetting the device, disabling specific features, or clearing sensitive cryptographic assets.
  • Physically unclonable functions (PUFs): A PUF generates a device-unique, unclonable key that designers can use for device authentication and key wrapping. During the configuration process, it provides key protection and key material, or serves as a device identity. PUF technology is traditionally based on a unique, unclonable SRAM initialization pattern. It relies on dedicated hardware sources of entropy, supplemented by a software-updatable algorithmic component, so the extraction algorithm can be tuned and fixed via firmware as characterization data accumulates over the device's lifetime.

    The point of PUFs is to extract unique silicon fingerprints from inherent manufacturing process variations. These variations are practically impossible to reproduce or clone, even for the manufacturer, yet on a given device they behave in a consistent, repeatable way. The resulting fingerprint serves as the entropy source for generating the chip's cryptographic root key.
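The anti-tamper response policy described above can be sketched in a few lines. This is a hedged illustration: the sensor names, thresholds, and response tiers are hypothetical examples, not values from any particular device.

```python
# Illustrative anti-tamper monitor: compare environmental readings against an
# expected operating envelope and escalate the response with severity.
# All thresholds and response tiers here are hypothetical.

OPERATING_ENVELOPE = {
    "core_voltage_v": (0.85, 1.05),
    "temperature_c": (-40.0, 100.0),
    "clock_mhz": (95.0, 105.0),
}

def check_tamper(readings: dict) -> list:
    """Return the names of sensors whose readings fall outside the envelope."""
    violations = []
    for sensor, (low, high) in OPERATING_ENVELOPE.items():
        if not (low <= readings[sensor] <= high):
            violations.append(sensor)
    return violations

def respond(violations: list) -> str:
    """Pick a response: the more sensors out of range, the harsher the action."""
    if not violations:
        return "ok"
    if len(violations) == 1:
        return "disable-features"  # e.g., lock out debug interfaces
    return "zeroize-keys"          # clear sensitive cryptographic assets

# A voltage glitch combined with an overclocked configuration port:
glitched = {"core_voltage_v": 0.60, "temperature_c": 25.0, "clock_mhz": 180.0}
print(respond(check_tamper(glitched)))  # zeroize-keys
```

The tiered response mirrors the article's point: a single anomaly might only disable features, while multiple simultaneous anomalies justify clearing keys outright.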
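The fingerprint-to-root-key step can also be sketched as a simplified software model. Assumptions are labeled in the code: real PUFs use dedicated error-correction and entropy-conditioning hardware, whereas here majority voting over simulated noisy SRAM readouts stands in for error correction, and SHA-256 stands in for key derivation.

```python
import hashlib
import random

random.seed(7)

# The device's "silicon fingerprint": a fixed SRAM power-up pattern.
# (Simulated; on real silicon this comes from process variation.)
FINGERPRINT = [random.getrandbits(1) for _ in range(256)]

def read_sram(flip_probability=0.02):
    """Simulate one noisy readout of the SRAM power-up state."""
    return [bit ^ (random.random() < flip_probability) for bit in FINGERPRINT]

def majority_vote(readouts):
    """Recover the stable fingerprint from several noisy readouts."""
    n = len(readouts)
    return [1 if sum(bits) * 2 > n else 0 for bits in zip(*readouts)]

def derive_root_key(bits):
    """Condition the fingerprint into a fixed-length cryptographic root key."""
    raw = bytes(sum(b << i for i, b in enumerate(chunk))
                for chunk in zip(*[iter(bits)] * 8))
    return hashlib.sha256(raw).hexdigest()

# Two independent boot sequences reproduce the same root key, even though
# each individual readout is noisy.
key1 = derive_root_key(majority_vote([read_sram() for _ in range(9)]))
key2 = derive_root_key(majority_vote([read_sram() for _ in range(9)]))
assert key1 == key2
```

This captures both halves of the PUF property: the raw readouts differ from boot to boot (so the key is never stored at rest), yet the conditioned output is stable on one device and unpredictable across devices.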

When these features are available, they are typically built into FPGA solutions. However, as with most firmware and hardware security features, enterprise security teams need to make sure that the features are both enabled and properly configured so they can provide the expected protections.

Advanced Hardware Needs Advanced Protection
The ability to provide advanced security in FPGAs is crucial as these devices move into increasingly sensitive applications. In the cloud, for example, new architectures and rapidly evolving search and analytics workloads are leveraging FPGAs to build flexible, scalable applications, and both private and public cloud data, from customer records to internal datasets, must be protected from unauthorized access and modification. Features like side-channel attack protection, anti-tampering, and anti-cloning help FPGAs provide hardware-enforced isolation, identity management, and accelerated authentication.

Hardware and firmware require the same attention and level of scrutiny from security researchers and academics as higher levels of the stack receive. This applies to FPGA devices, which continue to push the state of the art in diverse industry sectors.