HASI: Hardware-Accelerated Stochastic Inference, A Defense Against Adversarial Machine Learning Attacks. (arXiv:2106.05825v1 [cs.CR])

DNNs are known to be vulnerable to so-called adversarial attacks, in which
inputs are carefully manipulated to induce misclassification. Existing defenses
are mostly software-based and come with high overheads or other limitations.
This paper presents HASI, a hardware-accelerated defense that uses a process we
call stochastic inference to detect adversarial inputs. HASI carefully injects
noise into the model at inference time and uses the model's response to
differentiate adversarial inputs from benign ones. We show an average
adversarial detection rate of 87%, which exceeds that of
state-of-the-art approaches at a much lower overhead. We demonstrate a
software/hardware co-design that reduces the performance impact
of stochastic inference to 1.58X-2X relative to the unprotected baseline,
compared to 14X-20X overhead for a software-only GPU implementation.
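The core idea of stochastic inference can be sketched as follows. This is a minimal illustration, not the paper's implementation: HASI injects noise into the model itself during inference, whereas this sketch perturbs the input as a simplified stand-in. The toy model, the noise level, the number of runs, and the agreement threshold are all hypothetical choices; a detector run repeats inference under random noise and flags the input as adversarial when the predicted labels disagree too often.

```python
import numpy as np

def stochastic_inference_detect(model, x, n_runs=20, noise_std=0.1,
                                threshold=0.7, rng=None):
    """Flag x as adversarial if noisy inferences disagree on the label.

    model:     callable mapping an input vector to a logits vector
    n_runs:    number of noisy inference passes
    noise_std: std. dev. of the Gaussian perturbation (hypothetical value)
    threshold: minimum label-agreement fraction for a benign verdict
    Returns True when the input is flagged as adversarial.
    """
    rng = np.random.default_rng(rng)
    labels = []
    for _ in range(n_runs):
        noisy_x = x + rng.normal(0.0, noise_std, size=x.shape)
        labels.append(int(np.argmax(model(noisy_x))))
    # Agreement = fraction of runs that voted for the most common label.
    agreement = np.bincount(labels).max() / n_runs
    # Benign inputs sit far from decision boundaries, so noise rarely
    # flips the label; adversarial inputs are fragile and flip often.
    return bool(agreement < threshold)
```

For example, with a toy linear classifier, an input far from the decision boundary keeps its label under noise and passes, while an input sitting on the boundary flips labels across runs and is flagged.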