New Side-Channel Attacks Target Graphics Processing Units
A trio of new attacks bypass CPUs to wring data from vulnerable GPUs.
November 7, 2018
A new class of side-channel vulnerabilities has been disclosed, and this time it's not the CPU under attack: it's the GPU.
New exploits published by computer scientists at the University of California, Riverside, leave both individual users and high-performance computing systems at potential risk. The three sets of exploits pull sensitive data out of a graphics processing unit, and do so with relative ease compared with some of the side-channel attacks that have been demonstrated on CPUs.
Two of the attacks target individual users, pulling information on website history and passwords. The third could open the door to an organization's machine-learning or neural network applications, exposing details about their computational model to competitors.
The researchers' paper, "Rendered Insecure: GPU Side Channel Attacks are Practical," was presented at the ACM SIGSAC Conference on Computer and Communications Security, and the vulnerabilities have been disclosed to Nvidia, Intel, and AMD.
The first two attacks exploit the way a GPU's cores communicate as they work in parallel on a workload. Knowing about that communication means that, "…if we coordinate it right then we achieve really high bandwidth so that we can block out noise," says Nael Abu-Ghazaleh, professor of computer science and engineering, and of electrical and computer engineering at the university.
The basic attack technique works like this: "There is a victim process and then there's somebody else spying on it through leakage in the caches or other shared resources," he says. The fact that all the cores share certain resources means that the attacker doesn't have to figure out which core is running a particular thread, greatly simplifying the attack.
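The spy-through-shared-resources pattern Abu-Ghazaleh describes follows the classic prime-and-probe technique. The sketch below is an illustrative simulation in Python, not the researchers' code; a real GPU attack would run timing kernels in CUDA or OpenCL against actual cache sets, and the set count here is an assumed value:

```python
CACHE_SETS = 64  # hypothetical number of shared cache sets being monitored

def prime(cache):
    """Fill every monitored cache set with the spy's own data."""
    for s in range(CACHE_SETS):
        cache[s] = "spy"

def probe(cache):
    """Re-access each set; sets the victim touched no longer hold spy data.
    In a real attack this is detected as a slow (cache-miss) access time."""
    return [s for s in range(CACHE_SETS) if cache[s] != "spy"]

# One simulated round: the victim's memory accesses evict the spy's lines.
cache = {}
prime(cache)
for s in (3, 17, 42):          # stand-in for the victim's access pattern
    cache[s] = "victim"
leaked = probe(cache)
print(leaked)  # -> [3, 17, 42], the pattern the spy recovers
```

Because every core sees the same shared cache, the spy recovers the victim's access pattern without ever learning which core ran the victim's thread.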
An attack on the API that handles memory allocation for the GPU cores ultimately lets an attacker determine which websites have been visited, a process Abu-Ghazaleh describes as website "fingerprinting." If the memory allocations are instead driven by the user's keystrokes, then "with well-known attacks on timing you can actually figure out with high certainty what are the candidate passwords and quickly get to the point where you can crack the password," he says.
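The timing step of such a keystroke attack can be sketched abstractly. The Python below is an illustration only: the event timestamps, candidate passwords, and per-candidate timing model are all invented, and the paper's actual analysis of allocation events is more sophisticated:

```python
def intervals(timestamps):
    """Inter-keystroke gaps recovered from observed allocation events."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def score(observed, model):
    """Lower score = the candidate's expected key-to-key timings fit better."""
    return sum(abs(o - m) for o, m in zip(observed, model))

# Times (ms) at which the spy observed keystroke-driven GPU allocations.
events = [0, 210, 330, 560]
observed = intervals(events)            # [210, 120, 230]

# Hypothetical expected inter-key gaps from a typing-timing model.
candidates = {"hunter2": [200, 130, 240], "passwd1": [90, 300, 150]}
best = min(candidates, key=lambda c: score(observed, candidates[c]))
print(best)  # -> hunter2
```

The spy never reads the keystrokes themselves; the timing of the allocation events alone is enough to rank candidate passwords.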
Vulnerable Intelligence
The vulnerability that affects machine learning applications depends on understanding certain performance counters that are actually designed to make programming a GPU easier.
"Things can go really wrong when you're writing GPU code because it's very sensitive to memory access patterns and so on," Abu-Ghazaleh says. "The counters are provided to give this insight, and they're accessible from user mode," he explains. If a spy process can watch these counters, it can gain incredible insight into the processes that are running.
In the attack, workloads are sent to the GPU concurrently with the victim workload to create contention, which in turn updates the counters. Within the GPU, there can be more than 200 of these counters tracking various performance aspects, so the picture of what is happening can become quite clear.
Abu-Ghazaleh says that the ultimate danger of these attacks would be in a shared GPU-compute configuration, such as in a cloud-based machine learning environment.
Turning off user mode access to the counters can defend against the third attack but would also break many existing applications that depend on the functionality. Nvidia has not yet released patches for the vulnerability, but Abu-Ghazaleh says that he understands patches to be in the works.
As of press time, Nvidia had not responded to Dark Reading's request for comment.