Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

7-2023

Abstract

Weight-sharing neural architecture search aims to optimize a configurable neural network model (supernet) for a variety of deployment scenarios across many devices with different resource constraints. Existing approaches use evolutionary search to extract models of different sizes from a supernet trained on a very large data set, and then fine-tune the extracted models on the typically small, real-world data set of interest. The computational cost of training thus grows linearly with the number of different model deployment scenarios. Hence, we propose Transfer-Once-For-All (TOFA) for supernet-style training on small data sets with constant computational training cost over any number of edge deployment scenarios. Given a task, TOFA obtains custom neural networks, both the topology and the weights, optimized for any number of edge deployment scenarios. To overcome the challenges arising from small data, TOFA utilizes a unified semi-supervised training loss to simultaneously train all subnets within the supernet, coupled with on-the-fly architecture selection at deployment time.
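
The training recipe the abstract describes (shared-weight subnets of one supernet updated jointly, with a supervised loss on labeled data and a pseudo-label loss on unlabeled data) can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' TOFA code: the slimmable layer, the fixed width set, the widest-subnet pseudo-label teacher, and the weighting lam are assumptions introduced here for exposition.

import torch
import torch.nn.functional as F
from torch import nn

class SlimmableLinear(nn.Module):
    """Linear layer whose output width is sliced at run time, so every
    subnet shares a prefix of the same weight tensor (weight sharing)."""
    def __init__(self, in_features, max_out):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(max_out, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(max_out))

    def forward(self, x, width):
        return F.linear(x, self.weight[:width], self.bias[:width])

class Supernet(nn.Module):
    def __init__(self, in_dim=32, max_hidden=64, n_classes=10):
        super().__init__()
        self.fc1 = SlimmableLinear(in_dim, max_hidden)
        self.head = nn.Linear(max_hidden, n_classes)
        self.max_hidden = max_hidden

    def forward(self, x, width):
        h = F.relu(self.fc1(x, width))
        if width < self.max_hidden:            # zero-pad so the shared head applies
            h = F.pad(h, (0, self.max_hidden - width))
        return self.head(h)

def train_step(net, opt, labeled, unlabeled, widths=(16, 32, 64), lam=0.5):
    """One update in which every subnet sees the same labeled batch
    (cross-entropy) and the same unlabeled batch (pseudo-labels produced
    by the widest subnet), so all subnets are trained simultaneously."""
    x_l, y_l = labeled
    opt.zero_grad()
    with torch.no_grad():                      # widest subnet acts as teacher
        pseudo = net(unlabeled, max(widths)).argmax(dim=1)
    loss = torch.zeros(())
    for w in widths:                           # subnets share the same weights
        loss = loss + F.cross_entropy(net(x_l, w), y_l)
        loss = loss + lam * F.cross_entropy(net(unlabeled, w), pseudo)
    loss.backward()
    opt.step()
    return float(loss)

For the abstract's on-the-fly architecture selection at deployment time, a hypothetical selector could simply return, for each device budget, the largest shared-weight subnet that fits, so no per-scenario retraining is needed; this is what keeps the training cost constant over the number of deployment scenarios. The parameter budget below is a hypothetical per-device constraint:

def pick_width(net, widths, param_budget):
    """Largest subnet whose sliced fc1 parameters fit the device budget."""
    in_features = net.fc1.weight.shape[1]
    feasible = [w for w in widths if w * (in_features + 1) <= param_budget]
    return max(feasible) if feasible else min(widths)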

Keywords

inference at the edge, neural architecture search, semi-supervised supernet training, weight-sharing NAS

Discipline

Systems Architecture

Research Areas

Intelligent Systems and Optimization

Areas of Excellence

Digital transformation

Publication

Proceedings of the 7th IEEE International Conference on Edge Computing and Communications, EDGE 2023, Chicago, IL, USA, July 2-8, 2023

First Page

26

Last Page

35

ISBN

9798350304831

Identifier

10.1109/EDGE60047.2023.00017

Publisher

IEEE Computer Society

City or Country

Los Alamitos, CA

Additional URL

https://doi.org/10.1109/EDGE60047.2023.00017
