
Joint Optimization of Quantization and Structured Sparsity for Compressed Deep Neural Networks


Abstract Deep neural networks (DNNs) have shown tremendous success in various cognitive tasks, such as image classification, speech recognition, etc. However, their usage on resource-constrained edge devices has been limited due to high computation and large memory requirements.

To overcome these challenges, recent works have extensively investigated model compression techniques such as element-wise sparsity, structured sparsity, and quantization. While most of these works apply these compression techniques in isolation, there have been very few studies on applying quantization and structured sparsity together to a DNN model.

This thesis co-optimizes structured sparsity and quantization constraints on DNN models during training. Specif...
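The two compression techniques named in the abstract can be illustrated with a minimal sketch. The block size, pruning ratio, and bit width below are illustrative assumptions, not the settings used in the thesis; the thesis applies these constraints jointly during training, whereas this sketch only applies them post hoc to a random weight matrix.

```python
import numpy as np

def prune_blocks(w, block=4, keep_ratio=0.5):
    """Structured (block-wise) sparsity: zero entire weight blocks
    with the smallest L2 norm, keeping the top keep_ratio fraction.
    Block size and ratio are illustrative, not from the thesis."""
    rows, cols = w.shape
    assert rows % block == 0 and cols % block == 0
    # View the matrix as a grid of (block x block) tiles.
    blocks = w.reshape(rows // block, block, cols // block, block)
    norms = np.sqrt((blocks ** 2).sum(axis=(1, 3)))
    k = int(norms.size * keep_ratio)
    thresh = np.sort(norms, axis=None)[-k]
    mask = (norms >= thresh)[:, None, :, None]
    return (blocks * mask).reshape(rows, cols)

def quantize_uniform(w, bits=4):
    """Symmetric uniform quantization to at most 2^bits - 1 levels."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 8)).astype(np.float32)
# Apply both constraints: prune blocks, then quantize the survivors.
w_compressed = quantize_uniform(prune_blocks(w), bits=4)
```

Because whole blocks are zeroed, the resulting sparsity pattern is hardware-friendly (regular memory access), and quantization further shrinks the weight memory of the surviving values.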
Created Date 2018
Contributor Srivastava, Gaurav (Author) / Seo, Jae-Sun (Advisor) / Chakrabarti, Chaitali (Committee member) / Berisha, Visar (Committee member) / Arizona State University (Publisher)
Subject Artificial intelligence / Computer engineering / Computer science / Deep learning / Deep Neural Networks / DNN quantization / DNN structured sparsity / DNN weight memory / Pareto-optimal
Type Masters Thesis
Extent 64 pages
Language English
Copyright
Note Masters Thesis Computer Engineering 2018
Collaborating Institutions Graduate College / ASU Library
Additional Formats MODS / OAI Dublin Core / RIS


  Full Text
7.8 MB application/pdf

Description Dissertation/Thesis