On-Chip Learning and Inference Acceleration of Sparse Representations

Abstract The past decade has seen a tremendous surge in running machine learning (ML) functions on mobile devices, from mere novelty applications to now indispensable features for the next generation of devices.

While the mobile platform capabilities range widely, long battery life and reliability are common design concerns that are crucial to remain competitive.

Consequently, state-of-the-art mobile platforms have become highly heterogeneous, combining powerful CPUs with GPUs to accelerate the computation of deep neural networks (DNNs), which are the most common structures used to perform ML operations.

But traditional von Neumann architectures are not optimized for the high memory bandwidth and massively parallel computation demands required by ...
Created Date 2019
Contributor Kadetotad, Deepak Vinayak (Author) / Seo, Jae-sun (Advisor) / Chakrabarti, Chaitali (Committee member) / Vrudhula, Sarma (Committee member) / Cao, Yu (Committee member) / Arizona State University (Publisher)
Subject Electrical engineering
Type Doctoral Dissertation
Extent 159 pages
Language English
Note Doctoral Dissertation Electrical Engineering 2019
Collaborating Institutions Graduate College / ASU Library

Full Text 25.2 MB application/pdf
Description Dissertation/Thesis