Modern computers expect a certain consistency in their operating environments. A nice, steady ticking of the electronic clock; smooth, consistent voltage to make everything run; and internal system temperatures that fall within a certain specified range. When their expectations aren't met, weird things can happen.
If those "weird things" happen because of unanticipated power fluctuations, it can be annoying. If they happen because a malicious actor intentionally manipulated power or other environmental elements, they can be the beginning of a devastating attack.
Glitching attacks cause a hardware fault by manipulating a system's environmental variables. When power, temperature, or clock signals are disrupted, the CPU and other processing components can skip instructions, temporarily halt program execution, or misbehave in other ways that allow attackers to slip malicious instructions into the processing gaps.
Glitching is most useful for systems that serve special purposes (like encryption), or those that are "headless" — IoT computers that don't have a standard user interface that can be manipulated by normal malware or social engineering techniques.
It's an outlier technique in the threat actor's toolkit, though. Glitching generally requires intimate knowledge of the hardware and software of the specific system under attack, and it requires physical access to that system. It is, though, something that security professionals should know about, especially if they have IoT systems under their care.
It should be noted that glitching attacks are far from simple to pull off (although researchers recently made them easier by releasing chip.fail, a toolkit to bring glitching "to the masses"). The goal in glitching isn't simply to stop a system from running — in most cases, cutting power would do that — but to gain access to the system's resources or damage its ability to effectively complete its given task when a purely software approach isn't effective.
Timing's Leading Edge
Many glitch attacks are based on the shape of a signal. The electrical signals that move through a computer system tend to have sharp rises and drops. On an oscilloscope, the image is a series of square waves. The processor knows to start a new instruction when it detects a sharp rise in voltage — the "leading edge" of the wave. In a presentation given at Black Hat 2015, Bret Giller, a computer security consultant at NCC Group, provided steps for implementing an electrical glitching attack.
In his presentation, Giller points out that each instruction takes a certain amount of time to execute; the execution time and the timing of those leading edges are in sync. If an attacker can inject a leading edge into the circuit so that it arrives too soon, then the processor can be tricked into executing a new instruction before the previous instruction has finished, or into skipping instructions altogether.
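The timing dependence Giller describes can be illustrated with a toy model (a minimal Python sketch, not real firmware; the instruction names, lockout logic, and glitch mechanics are invented for illustration): if a premature clock edge keeps one instruction's result from latching, a security-critical step can be skipped entirely.

```python
# Toy model of a clock-glitch attack. Each element of `program` is one
# instruction; the instruction whose index matches `skip_index` stands in
# for an instruction whose result never latches because a premature clock
# edge started the next instruction too early. All names are illustrative.

def run(program, skip_index=None):
    state = {"ok": False, "locked": False, "unlocked": False}
    for i, instruction in enumerate(program):
        if i == skip_index:
            continue  # glitched edge arrived early; result never latched
        instruction(state)
    return state

def check_password(state):
    # The attacker does not know the secret, so this check always fails.
    state["ok"] = ("guess" == "secret")

def lock_on_failure(state):
    # Lockout step: a failed check should block the unlock that follows.
    if not state["ok"]:
        state["locked"] = True

def unlock_device(state):
    if not state["locked"]:
        state["unlocked"] = True

program = [check_password, lock_on_failure, unlock_device]

normal = run(program)                  # lockout fires; device stays locked
glitched = run(program, skip_index=1)  # lockout skipped; device unlocks
```

The point of the sketch is that the attacker never defeats the password check itself — the glitch suppresses the single instruction that acts on the check's result.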
This kind of glitching can involve a power spike or manipulating the system's clock by speeding it up (overclocking). Ricardo Gomez da Silva, a faculty member at Technische Universität Berlin's Institut für Softwaretechnik und Theoretische Informatik, described these clock-glitching attacks and discussed how to protect against them in a paper published in 2014.
An attacker could gain access to the hardware and just inject stray signals to see what happens, but that's unlikely to be productive. Instead, as Ziyad Alsheri pointed out in a presentation given at Northeastern University in the fall of 2017, the attacker needs to have intimate knowledge of the processor, the overall system, and the software in order to know precisely when to inject the spurious signal and what to do with the brief burst of resulting chaos.
Glitching the Fall
While instruction execution is triggered by the leading edge of a signal, there are some operations, such as writing data to a memory location, that can be triggered by the sharp voltage fall on the trailing edge of a wave.
A drop in the voltage supplied to the system can eliminate the sharp fall that triggers these operations. In his Black Hat presentation, Giller said these "brown-out" glitches can cause data corruption and lost information, among other consequences. This sort of data corruption attack can be valuable when the system under attack is responsible for encryption or authentication: disrupting the data in one part of the process can weaken the entire process to the point that the protection is ineffective.
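The brown-out effect can also be sketched in miniature (a toy Python model; the voltages, latching threshold, and fixed dropout mask are all invented for illustration): if the supply voltage is too low when the trailing edge arrives, not every bit of the value being written latches, and the stored data is silently corrupted.

```python
# Toy model of brown-out data corruption. The trailing edge of the write
# signal latches a byte into memory; if the supply voltage is below the
# latching threshold at that moment, some bits fail to latch (modeled
# here with a fixed dropout mask). Voltages, the threshold, and the mask
# are assumptions made for this sketch, not real hardware parameters.

LATCH_THRESHOLD_V = 2.7  # assumed minimum voltage for a reliable latch

def write_byte(memory, addr, value, supply_voltage,
               dropout_mask=0b11011110):
    if supply_voltage >= LATCH_THRESHOLD_V:
        memory[addr] = value                 # clean write
    else:
        memory[addr] = value & dropout_mask  # some bits never latch

mem = {}
write_byte(mem, 0x10, 0xA5, supply_voltage=3.3)  # normal supply
write_byte(mem, 0x11, 0xA5, supply_voltage=2.2)  # brown-out during write

print(hex(mem[0x10]))  # 0xa5 (intact)
print(hex(mem[0x11]))  # 0x84 (silently corrupted)
```

Against an encryption or authentication routine, even a single corrupted byte like this can be enough for fault-analysis techniques to recover key material or slip past a check.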
By now, it should be obvious that there are easier ways to hack most systems. The descriptions given in academic papers and research notes show a process that involves a great deal of research and physical access in order to compromise a single system.
However, researchers Thomas Roth and Josh Datko made it simpler and less expensive at Black Hat 2019, when they presented "Chip.Fail," research conducted with their partner Dmitry Nedospasov. Not only did they demonstrate their glitching (fault-injection) attacks on IoT processors, they did so using less than $100 of equipment. They released this toolkit and framework at the conference, so researchers can test chips' vulnerability to these types of attacks.
Nevertheless, glitching may never replace social engineering as a way into office productivity computers. So far, it is not even a huge factor in compromising embedded control systems in the real world.
Yet cybersecurity professionals should remain aware of its possibilities, because it takes only one dedicated security research team's breakthrough to turn academic research into a real-world attack.