Detecting Adversarial Examples by Measuring their Stress Response


Abstract Machine learning (ML) and deep neural networks (DNNs) have achieved great success in a variety of application domains. However, despite significant effort to make these networks robust, they remain vulnerable to adversarial attacks, in which input that is perceptually indistinguishable from natural data is erroneously classified with high prediction confidence. Works on defending against adversarial examples can be broadly classified as correcting or detecting: the former aim to negate the effects of the attack and correctly classify the input, while the latter aim to detect and reject the input as adversarial. In this work, a new approach for detecting adversarial examples is proposed. The approach takes advantage of the robustness of n...
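
To make the threat described in the abstract concrete, below is a minimal sketch (not the thesis's method) of crafting an adversarial example with the Fast Gradient Sign Method, a standard attack in this literature. The classifier `model` and the labeled input pair `(x, y)` are assumed placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.03):
    """Perturb x by epsilon in the direction of the loss gradient's sign.

    model: an assumed PyTorch classifier returning logits
    x: input batch in [0, 1], y: correct integer labels
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # A small, perceptually negligible step that can nonetheless flip the
    # prediction, often with high confidence in the wrong class.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

A detecting defense, as the abstract classifies it, would aim to flag and reject `x_adv` rather than correct its label.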
Created Date 2019
Contributor Sun, Lin (Author) / Bazzi, Rida (Advisor) / Li, Baoxin (Committee member) / Tong, Hanghang (Committee member) / Arizona State University (Publisher)
Subject Computer science / adversarial attack / adversarial examples / defense / DNNs
Type Masters Thesis
Extent 64 pages
Language English
Copyright
Note Masters Thesis Computer Science 2019
Collaborating Institutions Graduate College / ASU Library


Full Text 6.8 MB application/pdf

Description Dissertation/Thesis