
Practical Hardware Attacks on Deep Learning by Sanghyun Hong | Hardwear.io Webinar
Abstract:
----------
The widespread adoption of machine learning (ML) incentivizes potential adversaries who wish to manipulate systems that include ML components. Consequently, research in adversarial ML studies attack surfaces such as predictions (manipulated by adversarial examples) or models (manipulated by malicious training data). However, most prior work treats ML as an isolated concept and overlooks the security threats posed by practical hardware attacks such as fault injection or side channels.

In this talk, we will present a new perspective on these threats: we view ML as a computational tool running on hardware, which is itself a potentially vulnerable attack surface. We will then introduce our emerging research on the vulnerability of ML models to practical hardware attacks. First, we will review the impact of a well-studied fault-injection attack, Rowhammer. Second, we will discuss the impact of information-leakage attacks, such as side-channel attacks. These attacks can inflict unexpected damage and ultimately shed new light on the dangers of hardware-based attack vectors. We will conclude by emphasizing that the vulnerability of ML to hardware attacks is still an under-studied topic; we therefore encourage the community to re-examine the security properties guaranteed by prior work from this new angle.
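To give a concrete flavor of why a single hardware fault can matter (a minimal sketch of our own, not material from the talk): model weights are commonly stored as IEEE-754 float32 values, so flipping one high exponent bit, as a Rowhammer-style fault might, can change a weight by dozens of orders of magnitude.

import struct

def flip_bit(value: float, bit: int) -> float:
    """Return `value` with bit `bit` flipped in its IEEE-754 float32 encoding."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    (result,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return result

weight = 0.42                     # a typical small model weight
corrupted = flip_bit(weight, 30)  # flip the most significant exponent bit
print(weight, "->", corrupted)    # 0.42 -> ~1.4e+38: one bit, ~38 orders of magnitude

A single weight corrupted this way can saturate downstream activations, which conveys the intuition behind fault-injection attacks on deep learning models.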

You can learn more about our research at http://hardwarefail.ml/.

Speaker Biography:
----------------------
Sanghyun Hong is a Ph.D. candidate in the Maryland Cybersecurity Center (MC2) at the University of Maryland, College Park (UMD), advised by Prof. Tudor Dumitras. His research interests lie in computer security and machine learning (ML). In his dissertation research, he has exposed the vulnerabilities of deep learning systems to practical hardware attacks.