Peter Horvath at Hardwear.io USA 2024

Talk Title:

BarraCUDA: GPUs do Leak DNN Weights


Over the last decade, applications of deep neural networks (DNNs) have spread to many aspects of our lives. A large number of companies base their businesses on products that use DNNs for tasks such as face recognition, machine translation, and self-driving cars. Neural networks are also used in safety- and security-critical applications like high-definition maps and medical wristbands, and in globally used products like ChatGPT and Google Translate. Much of the intellectual property underpinning these products is encoded in the exact parameters of the DNNs, so protecting these parameters is of utmost priority to businesses. At the same time, many of these products need to operate under a strong threat model, in which the adversary has unfettered physical control of the product.

Past work has demonstrated that, with physical access, attackers can reverse engineer neural networks that run on microcontrollers and FPGAs. However, for performance reasons, neural networks are often implemented on highly parallel general-purpose graphics processing units (GPGPUs), and so far, attacks on these have only recovered coarse-grained information about the structure of the neural network, but failed to retrieve the detailed weights and biases.

In this work, we present BarraCUDA, a novel attack on GPGPUs that can extract the parameters of DNNs running on the popular Nvidia Jetson Nano device. BarraCUDA uses correlation electromagnetic analysis to recover the weights and biases of real-world convolutional neural networks in a highly parallel and noisy environment.
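To give a feel for how correlation-based side-channel analysis recovers a secret value, here is a minimal Python simulation of the underlying principle: guess a candidate weight, predict a leakage value (here, a Hamming-weight model of an 8-bit product) for each known input, and correlate the predictions against noisy traces. The leakage model, the simulated traces, and the 8-bit quantization are all illustrative assumptions for this sketch, not BarraCUDA's actual measurement pipeline.

```python
import numpy as np

# Toy correlation attack: recover one secret 8-bit "weight" by correlating
# hypothetical leakage predictions against simulated noisy traces.
# All modeling choices here are assumptions made for illustration only.

def hamming_weight(v):
    """Hamming weight of each unsigned 8-bit value in v."""
    return np.unpackbits(v.reshape(-1, 1), axis=1).sum(axis=1)

rng = np.random.default_rng(1)
n_traces = 5000
true_w = 93                                              # the secret weight
inputs = rng.integers(0, 256, n_traces, dtype=np.uint8)  # known inputs

# Simulated traces: Hamming weight of the low byte of input*weight, plus noise.
product = (inputs.astype(np.uint16) * true_w).astype(np.uint8)
traces = hamming_weight(product) + rng.normal(0.0, 1.0, n_traces)

# Try every candidate weight; the correct guess predicts the leakage best,
# so its hypothesis correlates most strongly with the traces.
best_w, best_c = None, -1.0
for w in range(1, 256):
    hypo = hamming_weight((inputs.astype(np.uint16) * w).astype(np.uint8))
    c = np.corrcoef(hypo, traces)[0, 1]
    if c > best_c:
        best_w, best_c = w, c

print(f"recovered weight: {best_w} (correlation {best_c:.2f})")
```

In a real attack the "traces" are electromagnetic measurements of the device executing multiply-accumulate operations, and the parallelism and noise of a GPU make the statistics far harder, but the candidate-ranking idea is the same.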

Speaker Bio:

Peter Horvath is interested in everything related to reverse engineering and hacking. His research focuses on attacking algorithms such as neural networks and cryptographic implementations via physical side channels.