Publication Type
PhD Dissertation
Version
publishedVersion
Publication Date
9-2019
Abstract
Over the past few years, deep learning has emerged as the state-of-the-art solution for many challenging computer vision tasks such as face recognition and object detection. Despite their outstanding performance, deep neural networks (DNNs) are computationally intensive, which prevents them from being widely adopted on the billions of mobile and embedded devices with scarce resources. To address that limitation, we focus on building systems and optimization algorithms that accelerate those models, making them more computationally efficient.
First, this thesis explores the computational capabilities of the different processors (and co-processors) available on modern mobile devices. It shows that by leveraging mobile Graphics Processing Units (mGPUs), we can reduce the time consumed in the deep learning inference pipeline by an order of magnitude. We further investigated a variety of mGPU optimizations for additional speedups and built the DeepSense framework to demonstrate their use.
Second, we discovered that video streams often contain regions that stay invariant (e.g., background, static objects) across multiple video frames. Reprocessing those regions frame after frame wastes a lot of computational power. We proposed a convolutional caching technique and built the DeepMon framework, which quickly identifies the static regions and skips the computations on them throughout the deep neural network processing pipeline.
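To illustrate the caching idea, here is a minimal Python sketch. The block-difference test and the names changed_blocks and cached_conv are illustrative assumptions, not the DeepMon implementation, and the sketch omits the receptive-field halo a real system must also recompute around each changed block.

import numpy as np

def changed_blocks(frame, prev_frame, block=16, tol=1e-3):
    # Mark which image blocks differ between consecutive frames.
    H, W = frame.shape[:2]
    grid = np.zeros((H // block, W // block), dtype=bool)
    for by in range(grid.shape[0]):
        for bx in range(grid.shape[1]):
            cur = frame[by*block:(by+1)*block, bx*block:(bx+1)*block]
            old = prev_frame[by*block:(by+1)*block, bx*block:(bx+1)*block]
            grid[by, bx] = np.mean(np.abs(cur - old)) > tol
    return grid

def cached_conv(frame, prev_frame, cached_out, conv_fn, block=16):
    # Reuse cached convolution outputs for blocks that did not change.
    # conv_fn maps an image region to a same-sized output ('same' padding
    # assumed); only changed blocks are recomputed.
    out = cached_out.copy()
    grid = changed_blocks(frame, prev_frame, block)
    for by, bx in zip(*np.nonzero(grid)):
        ys, xs = by * block, bx * block
        region = frame[ys:ys+block, xs:xs+block]
        out[ys:ys+block, xs:xs+block] = conv_fn(region)
    return out

When most blocks are static, only a small fraction of the convolution is recomputed per frame, which is the source of the savings.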
The thesis also explores how to make deep learning models more computationally efficient by pruning unnecessary parameters. Many studies have shown that most of the computation occurs within convolutional layers, which are widely used in the convolutional neural networks (CNNs) behind many computer vision tasks. We designed a novel D-Pruner algorithm that scores the parameters by how important they are to the final performance. Parameters with little impact are removed, yielding smaller, faster and more computationally efficient models.
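The score-then-prune workflow can be sketched as follows. Note that D-Pruner derives its importance scores from the training signal; the simple L1-magnitude score used here is only a stand-in to keep the example self-contained.

import numpy as np

def prune_conv_filters(weights, keep_ratio=0.5):
    # weights: (num_filters, in_channels, kH, kW) for one conv layer.
    # Score each output filter by its L1 magnitude (a stand-in for the
    # D-Pruner importance score), then keep only the top fraction.
    scores = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(keep_ratio * len(scores))))
    keep = np.sort(np.argsort(scores)[::-1][:n_keep])
    # Return the pruned layer plus the surviving filter indices, which
    # the next layer needs in order to drop the matching input channels.
    return weights[keep], keep

w = np.random.randn(64, 32, 3, 3)   # 64 filters over 32 input channels
pruned, kept = prune_conv_filters(w, keep_ratio=0.5)
print(pruned.shape)                 # -> (32, 32, 3, 3)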
Finally, we investigated the feasibility of using multi-exit models (MXNs), which consist of multiple neural networks with shared layers, as an efficient way to accelerate many existing computer vision tasks. We show that techniques such as aggregating results across exits and threshold-based early exiting with MXNs can significantly reduce inference latency in indexed video querying and face recognition systems.
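A minimal sketch of the two MXN inference strategies mentioned above, assuming each exit is an opaque callable returning a class-probability vector (in a real MXN the exits share the backbone's early layers); the threshold value and function names are illustrative.

import numpy as np

def early_exit_predict(x, exits, threshold=0.9):
    # Evaluate exits in order (cheapest first) and stop at the first one
    # whose top-class probability clears the confidence threshold.
    probs = None
    for i, exit_fn in enumerate(exits):
        probs = exit_fn(x)
        if probs.max() >= threshold:
            return int(probs.argmax()), i       # early exit taken
    return int(probs.argmax()), len(exits) - 1  # fell through to last exit

def aggregated_predict(x, exits):
    # The aggregation variant: average probabilities across all exits.
    return int(np.mean([exit_fn(x) for exit_fn in exits], axis=0).argmax())

Easy inputs thus leave the network at a shallow exit, paying for only the shared layers they actually used, while hard inputs fall through to the full model.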
Keywords
Deep learning, deep neural network, mobile deep learning, model approximation, model pruning, specialized model, multi-exit models, anytime neural network
Degree Awarded
PhD in Information Systems
Discipline
Programming Languages and Compilers | Software Engineering
Supervisor(s)
LEE, Youngki; BALAN, Rajesh Krishna
Publisher
Singapore Management University
City or Country
Singapore
Citation
HUYNH, Nguyen Loc.
Exploiting approximation, caching and specialization to accelerate vision sensing applications. (2019).
Available at: https://ink.library.smu.edu.sg/etd_coll/242
Copyright Owner and License
Author
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.