ASU Electronic Theses and Dissertations
This collection includes most of the ASU Theses and Dissertations from 2011 to the present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about each dissertation or thesis includes degree information, committee members, an abstract, and any supporting data or media.
In addition to the electronic theses in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.
Dissertations and theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at firstname.lastname@example.org.
- Computer engineering
- 3 Electrical engineering
- 2 Artificial intelligence
- 2 Computer science
- 1 Coarse Grain Reconfigurable Arrays
- 1 Compilers
- 1 Computer Vision
- 1 Control flow
- 1 Convolutional Neural Networks
- 1 FPGA
- 1 Hardware Accelerator
- 1 High performance ASIC
- 1 Microarchitecture
- 1 Minimum energy
- 1 Multi-input Flip-flop
- 1 Non-volatile
- 1 Reconfigurable
- 1 Threshold logic
- 1 analog neural network
- 1 distillation
- 1 energy efficient
- 1 fisher information
- 1 high performance
- 1 kl-divergence
- 1 non-volatile memory accelerator
- 1 quantized neural network
Static CMOS logic has remained the dominant design style of digital systems for more than four decades due to its robustness and near-zero standby current. Static CMOS logic circuits consist of a network of combinational logic cells and clocked sequential elements, such as latches and flip-flops, which are used for sequencing computations over time. The majority of digital design techniques for reducing power, area, and leakage over the past four decades have focused almost entirely on optimizing the combinational logic. This work explores alternative flip-flop architectures to improve overall circuit performance, power, and area. It …
- Yang, Jinghua, Vrudhula, Sarma, Barnaby, Hugh, et al.
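The sequencing role that flip-flops play in the abstract above can be illustrated with a minimal Python sketch (a hypothetical illustration, not from the thesis): a register separates two pipeline stages, so the downstream stage always consumes the value captured on the previous clock edge.

```python
# Minimal sketch (hypothetical, not from the thesis): how clocked
# flip-flops sequence a combinational computation over time.

from dataclasses import dataclass

@dataclass
class DFlipFlop:
    """Positive-edge-triggered D flip-flop: output updates only on a clock tick."""
    q: int = 0  # stored state, visible at the output

    def tick(self, d: int) -> int:
        """On a clock edge, capture the input d and expose it as q."""
        self.q = d
        return self.q

def combinational(a: int, b: int) -> int:
    # Stand-in for a cloud of static CMOS gates (here: simple AND/XOR logic).
    return (a & b) | (a ^ b)

# Two pipeline stages separated by a flip-flop: the register holds the
# stage-1 result stable while stage 2 consumes last cycle's value.
reg = DFlipFlop()
inputs = [(1, 0), (1, 1), (0, 1)]
for cycle, (a, b) in enumerate(inputs):
    stage2_in = reg.q                # value latched on the previous clock edge
    reg.tick(combinational(a, b))    # clock edge: capture this cycle's result
    print(f"cycle {cycle}: stage 2 sees {stage2_in}, register now holds {reg.q}")
```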
The rapid improvement in computation capability has made deep convolutional neural networks (CNNs) a great success in recent years on many computer vision tasks, with significantly improved accuracy. During the inference phase, many applications demand low-latency processing of a single image under strict power-consumption requirements, which reduces the efficiency of GPUs and other general-purpose platforms and creates opportunities for specialized acceleration hardware, e.g. FPGAs, whose digital circuits can be customized for deep learning inference. However, deploying CNNs on portable and embedded systems is still challenging due to large data volume, intensive computation, varying algorithm structures, and frequent memory …
- Ma, Yufei, Vrudhula, Sarma, Seo, Jae-sun, et al.
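To make "large data volume, intensive computation" concrete, here is a back-of-the-envelope Python sketch with an assumed layer shape (not taken from the dissertation) that counts multiply-accumulate operations (MACs) and memory traffic for one convolutional layer.

```python
# Back-of-the-envelope sketch (assumed layer shape, not from the dissertation):
# why CNN inference strains embedded hardware.

def conv_layer_cost(h, w, c_in, c_out, k, bytes_per_val=2):
    """Return (MACs, weight bytes, activation bytes) for a stride-1 convolution."""
    macs = h * w * c_out * c_in * k * k           # one MAC per output element per tap
    weights = c_out * c_in * k * k * bytes_per_val
    activations = (h * w * c_in + h * w * c_out) * bytes_per_val
    return macs, weights, activations

# Example: a mid-network ResNet-like layer, 56x56 feature map, 3x3 kernels,
# 16-bit values -- roughly 462M MACs for a single layer of a single image.
macs, wbytes, abytes = conv_layer_cost(h=56, w=56, c_in=128, c_out=128, k=3)
print(f"MACs: {macs / 1e6:.0f} M, weights: {wbytes / 1e3:.0f} KB, "
      f"activations: {abytes / 1e3:.0f} KB")
```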
Coarse Grain Reconfigurable Arrays (CGRAs) are promising accelerators capable of achieving high performance at low power consumption. While CGRAs can efficiently accelerate loop kernels, accelerating loops with control flow (loops with if-then-else structures) is quite challenging. Techniques that handle control-flow execution on CGRAs generally use predication. Such techniques execute both branches of an if-then-else structure and select the outcome of one branch to commit based on the result of the conditional. This results in poor utilization of the CGRA's computational resources. The dual-issue scheme, the state-of-the-art technique for control flow, fetches instructions from both paths of …
- Rajendran Radhika, Shri Hari, Shrivastava, Aviral, Christen, Jennifer Blain, et al.
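The predication scheme this abstract describes can be sketched in a few lines of Python (a hypothetical illustration, not the dissertation's implementation): both branches execute every iteration and a predicate selects which result commits, which is why the untaken path wastes the CGRA's processing elements.

```python
# Minimal sketch (hypothetical) of predicated execution of an if-then-else.

def kernel_with_branch(x):
    # Original loop body with control flow.
    if x > 0:
        return x * 2      # then-path
    else:
        return x + 10     # else-path

def kernel_predicated(x):
    p = x > 0             # predicate, computed like any other dataflow value
    t = x * 2             # then-path: always executed
    f = x + 10            # else-path: always executed
    return t if p else f  # select: only one result commits

# Both forms compute the same function; the predicated form simply has no branch.
assert all(kernel_with_branch(x) == kernel_predicated(x) for x in range(-5, 6))
```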
Deep neural networks (DNNs) have had tremendous success in a variety of statistical learning applications due to their vast expressive power. Most applications run DNNs on the cloud on parallelized architectures, but there is a growing need for efficient DNN inference at the edge with low-precision hardware and analog accelerators. To make trained models more robust in this setting, quantization and analog compute noise are modeled as weight-space perturbations to DNNs, and an information-theoretic regularization scheme is used to penalize the KL divergence between the perturbed and unperturbed models. This regularizer has similarities to both natural gradient descent and knowledge distillation, …
- Kadambi, Pradyumna, Berisha, Visar, Dasarathy, Gautam, et al.
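A minimal NumPy sketch of the regularization idea, with assumed shapes, noise model, and hyperparameters (not the thesis code): perturb the weights, compare the clean and perturbed output distributions, and add their KL divergence to the training loss.

```python
# Minimal sketch (assumed shapes, noise model, and hyperparameters):
# model quantization/analog noise as a weight-space perturbation and
# penalize the KL divergence between clean and perturbed predictions.

import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-12):
    """KL(p || q), averaged over the batch."""
    return np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1))

# A one-layer linear classifier stands in for the DNN.
W = rng.normal(size=(16, 4))           # clean weights
x = rng.normal(size=(32, 16))          # a batch of inputs

sigma = 0.05                           # perturbation scale (assumed)
W_noisy = W + sigma * rng.normal(size=W.shape)

p_clean = softmax(x @ W)               # unperturbed predictive distribution
p_noisy = softmax(x @ W_noisy)         # distribution under weight perturbation

lam = 0.1                              # regularization weight (assumed)
task_loss = 0.0                        # placeholder for the usual training loss
total_loss = task_loss + lam * kl_div(p_clean, p_noisy)
print(f"KL penalty: {kl_div(p_clean, p_noisy):.4f}")
```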