Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
6-2017
Abstract
Deep learning has revolutionized vision sensing applications in terms of accuracy compared to other techniques. Its breakthrough comes from the ability to extract complex high-level features directly from sensor data. However, deep learning models are not yet natively supported on mobile devices due to their high computational requirements. In this paper, we present DeepMon, the next generation of the DeepSense [1] framework, which enables deep learning models on conventional mobile devices (e.g., the Samsung Galaxy S7) for continuous vision sensing applications. First, DeepMon exploits the similarity between consecutive video frames to cache intermediate data within models, reducing inference latency. Second, DeepMon leverages approximation techniques (e.g., Tucker decomposition) to build approximated models with negligible impact on accuracy. Third, DeepMon offloads heavy computation onto the integrated mobile GPU to significantly reduce the execution time of the model.
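The frame-similarity caching is the core of DeepMon's latency savings, so a minimal illustrative sketch may help. The NumPy code below is a hypothetical toy, not DeepMon's implementation: it splits a frame into blocks, compares each block against the same region of the previous frame, and reuses the cached per-block output when the block is effectively unchanged. The block size, the change threshold, and the stand-in conv_block function are all assumptions introduced for illustration.

import numpy as np

BLOCK = 32        # assumed block size in pixels
THRESHOLD = 1.0   # assumed mean-absolute-difference cutoff for "unchanged"

def conv_block(block):
    """Stand-in for an expensive per-block layer computation (hypothetical)."""
    return block * 2.0

def cached_layer(frame, prev_frame, cache):
    """Recompute only blocks that changed since the previous frame."""
    h, w = frame.shape
    out = np.empty_like(frame)
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            blk = frame[y:y + BLOCK, x:x + BLOCK]
            changed = (prev_frame is None or
                       np.abs(blk - prev_frame[y:y + BLOCK, x:x + BLOCK]).mean() >= THRESHOLD)
            if changed:
                cache[(y, x)] = conv_block(blk)  # recompute and refresh the cache entry
            out[y:y + BLOCK, x:x + BLOCK] = cache[(y, x)]  # cached or fresh result
    return out

# Usage: feed consecutive frames; only regions that moved trigger recomputation.
cache = {}
prev = None
for _ in range(3):
    frame = np.random.rand(128, 128)
    result = cached_layer(frame, prev, cache)
    prev = frame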
Keywords
Continuous vision, Deep learning, Mobile GPU, Mobile sensing
Discipline
Hardware Systems | Software Engineering
Research Areas
Software and Cyber-Physical Systems
Publication
MobiSys 2017: Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services, Niagara Falls, June 19-23
First Page
186
Last Page
186
ISBN
9781450349284
Identifier
10.1145/3081333.3089331
Publisher
ACM
City or Country
New York
Citation
HUYNH, Loc Nguyen; BALAN, Rajesh Krishna; and LEE, Youngki.
DEMO: DeepMon - Building mobile GPU deep learning models for continuous vision applications. (2017). MobiSys 2017: Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services, Niagara Falls, June 19-23. 186-186.
Available at: https://ink.library.smu.edu.sg/sis_research/3672
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.
Additional URL
http://doi.org/10.1145/3081333.3089331