
Robust Networks: Neural Networks Robust to Quantization Noise and Analog Computation Noise Based on Natural Gradient

Abstract Deep neural networks (DNNs) have had tremendous success in a variety of statistical learning applications due to their vast expressive power. Most applications run DNNs on the cloud on parallelized architectures. There is a need for efficient DNN inference on the edge with low-precision hardware and analog accelerators. To make trained models more robust in this setting, quantization and analog-compute noise are modeled as weight-space perturbations to DNNs, and an information-theoretic regularization scheme is used to penalize the KL divergence between the perturbed and unperturbed models. This regularizer has similarities to both natural gradient descent and knowledge distillation, but has the advantage of explicitly promoting the ne…
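The regularization idea described in the abstract can be illustrated with a minimal numpy sketch: model hardware noise as an additive perturbation of the weights, then penalize the KL divergence between the clean and perturbed predictive distributions. This is an illustrative toy (a single softmax layer, Gaussian weight noise, made-up names and noise scale), not the thesis's actual implementation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) per example, with a small epsilon for stability.
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))      # clean weights of a toy linear classifier
x = rng.normal(size=(5, 4))      # a small batch of inputs

p_clean = softmax(x @ W)

# Model quantization / analog-compute noise as an additive
# weight-space perturbation (assumed Gaussian here for illustration).
W_noisy = W + 0.05 * rng.normal(size=W.shape)
p_noisy = softmax(x @ W_noisy)

# The regularization penalty: divergence between the unperturbed
# and perturbed models' output distributions, averaged over the batch.
penalty = kl_divergence(p_clean, p_noisy).mean()
```

In training, a term like `penalty` would be added to the task loss so that gradient descent prefers weight configurations whose predictions change little under such perturbations.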
Created Date 2019
Contributor Kadambi, Pradyumna (Author) / Berisha, Visar (Advisor) / Dasarathy, Gautam (Committee member) / Seo, Jae-Sun (Committee member) / Cao, Yu (Committee member) / Arizona State University (Publisher)
Subject Artificial intelligence / Computer engineering / Computer science / analog neural network / distillation / fisher information / kl-divergence / non-volatile memory accelerator / quantized neural network
Type Masters Thesis
Extent 83 pages
Language English
Note Masters Thesis Computer Engineering 2019
Collaborating Institutions Graduate College / ASU Library

Full Text 1.7 MB application/pdf

Description Dissertation/Thesis