Publication Type

Conference Proceeding Article

Version

acceptedVersion

Publication Date

4-2022

Abstract

While Deep Neural Network (DNN) models have transformed machine vision capabilities, their extremely high computational complexity and model sizes present a formidable deployment roadblock for AIoT applications. We show that the complexity-vs-accuracy-vs-communication tradeoffs for such DNN models can be significantly addressed via a novel, lightweight form of “collaborative machine intelligence” that requires only runtime changes to the inference process. In our proposed approach, called ComAI, the DNN pipelines of different vision sensors share intermediate processing state with one another, effectively providing hints about objects located within their mutually-overlapping Fields-of-View (FoVs). ComAI uses two novel techniques: (a) a secondary shallow ML model that uses features from early layers of a peer DNN to predict object confidence values in the image, and (b) a pipelined sharing of such confidence values, by collaborators, that is then used to bias a reference DNN’s outputs. We demonstrate that ComAI (a) can boost the accuracy (recall) of DNN inference by 20-50%, (b) works across heterogeneous DNN models and deployments, and (c) incurs negligible bandwidth and processing overheads compared to non-collaborative baselines.
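The sketch below illustrates, in simplified form, the two mechanisms named in the abstract: a shallow secondary model that maps early-layer features from a peer DNN to per-class object-confidence hints, and the use of those shared hints to bias a reference DNN's detection scores. It is not the authors' implementation; all names, shapes, and the mixing rule are assumptions made for illustration only.

```python
# Minimal sketch (assumed names/shapes, not the ComAI codebase) of:
# (1) a shallow "hint" model over a peer DNN's early-layer features, and
# (2) biasing a reference DNN's detection confidences with the shared hints.
import numpy as np

rng = np.random.default_rng(0)
NUM_CLASSES = 3       # assumed number of object classes
FEATURE_DIM = 256     # assumed size of pooled early-layer features

# (1) Shallow secondary model: one linear layer + sigmoid over pooled
# early-layer features from the peer camera's DNN. Weights are random here;
# in practice they would be trained offline.
W = rng.normal(scale=0.01, size=(NUM_CLASSES, FEATURE_DIM))
b = np.zeros(NUM_CLASSES)

def peer_confidence_hints(early_features: np.ndarray) -> np.ndarray:
    """Predict per-class object-confidence hints from peer early-layer features."""
    logits = W @ early_features + b
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid -> hints in [0, 1]

# (2) Bias the reference DNN's per-detection scores with the shared hints
# for the overlapping field-of-view; alpha is an assumed mixing weight.
def bias_detections(scores: np.ndarray, classes: np.ndarray,
                    hints: np.ndarray, alpha: float = 0.3) -> np.ndarray:
    """Rescore each detection as (1 - alpha) * own score + alpha * peer hint."""
    return (1.0 - alpha) * scores + alpha * hints[classes]

# Toy usage: pooled peer features and three detections from the reference DNN.
features = rng.normal(size=FEATURE_DIM)
hints = peer_confidence_hints(features)
det_scores = np.array([0.42, 0.55, 0.30])   # reference DNN confidences
det_classes = np.array([0, 2, 1])           # predicted class per detection
print(bias_detections(det_scores, det_classes, hints))
```

Raising low-confidence scores for objects a peer has likely seen in the shared FoV is what allows collaboration to recover detections the reference DNN alone would miss, at the cost of only a small vector exchange per frame.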

Keywords

Deep learning, runtime, machine vision, neural networks

Discipline

Artificial Intelligence and Robotics | Graphics and Human Computer Interfaces | Software Engineering

Research Areas

Software and Cyber-Physical Systems

Publication

2022 IEEE International Conference on Computer Communications, Virtual Conference, May 2-5: Proceedings

First Page

41

Last Page

50

ISBN

9781665458221

Identifier

10.1109/INFOCOM48880.2022.9796769

Publisher

IEEE

City or Country

Piscataway, NJ

Embargo Period

5-23-2022

Copyright Owner and License

Authors

Additional URL

https://doi.org/10.1109/INFOCOM48880.2022.9796769
