
Hardware Acceleration of Deep Convolutional Neural Networks on FPGA


Abstract The rapid improvement in computation capability has made deep convolutional neural networks (CNNs) a great success in recent years on many computer vision tasks with significantly improved accuracy. During the inference phase, many applications demand low-latency processing of a single image under strict power-consumption requirements, which reduces the efficiency of GPUs and other general-purpose platforms and creates opportunities for specialized acceleration hardware, e.g., FPGAs, whose digital circuits can be customized for deep learning inference. However, deploying CNNs on portable and embedded systems is still challenging due to the large data volume, intensive computation, varying algorithm structures, and frequent memory accesses. […]
Created Date 2018
Contributor Ma, Yufei (Author) / Vrudhula, Sarma (Advisor) / Seo, Jae-sun (Advisor) / Cao, Yu (Committee member) / Barnaby, Hugh (Committee member) / Arizona State University (Publisher)
Subject Electrical engineering / Computer engineering / Artificial intelligence / Computer Vision / Convolutional Neural Networks / FPGA / Hardware Accelerator
Type Doctoral Dissertation
Extent 169 pages
Language English
Copyright
Note Doctoral Dissertation Electrical Engineering 2018
Collaborating Institutions Graduate College / ASU Library


Full Text 8.9 MB application/pdf

Description Dissertation/Thesis