Publication Type
Conference Proceeding Article
Version
acceptedVersion
Publication Date
6-2023
Abstract
Previous knowledge-distillation-based efficient image retrieval methods employ a lightweight network as the student model for fast inference. However, the lightweight student model lacks adequate representation capacity for effective knowledge imitation during the most critical early training period, degrading final performance. To tackle this issue, we propose a Capacity Dynamic Distillation framework, which constructs a student model with editable representation capacity. Specifically, the student model starts as a heavy model so that it can fruitfully learn the distilled knowledge in the early training epochs, and it is gradually compressed during training. To dynamically adjust the model capacity, our framework inserts a learnable convolutional layer within each residual block of the student model as a channel importance indicator. The indicator is optimized simultaneously by the image retrieval loss and the compression loss, and a retrieval-guided gradient resetting mechanism is proposed to resolve the conflict between their gradients. Extensive experiments show that our method achieves superior inference speed and accuracy; e.g., on the VeRi-776 dataset, with ResNet101 as the teacher, our method saves 67.13% of model parameters and 65.67% of FLOPs without sacrificing accuracy. Code is available at https://github.com/SCY-X/Capacity-Dynamic-Distillation.
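For orientation, the sketch below illustrates the general pattern the abstract describes: a learnable per-channel indicator attached to a residual block's output, a sparsity-style compression loss on the indicator weights, and a reset rule applied where the compression gradient conflicts with the retrieval gradient. The names (ChannelIndicator, compression_loss, reconcile_gradients), the L1 penalty, and the sign-based conflict test are illustrative assumptions, not the authors' implementation; the repository linked above is authoritative.

```python
import torch
import torch.nn as nn

class ChannelIndicator(nn.Module):
    """Hypothetical stand-in for the learnable 1x1 indicator convolution
    the paper inserts into each residual block of the student model."""
    def __init__(self, channels: int):
        super().__init__()
        # Initialized to 1 so training starts from the full heavy model.
        self.scale = nn.Parameter(torch.ones(channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Scale each feature channel by its learned importance.
        return x * self.scale.view(1, -1, 1, 1)

def compression_loss(indicators, strength: float = 1e-4) -> torch.Tensor:
    # Assumed L1 sparsity: drives unimportant channel scales toward zero,
    # marking those channels as prunable as training progresses.
    return strength * sum(m.scale.abs().sum() for m in indicators)

def reconcile_gradients(g_retrieval: torch.Tensor,
                        g_compress: torch.Tensor) -> torch.Tensor:
    # Hypothetical retrieval-guided reset: where the compression gradient
    # opposes the retrieval gradient in sign, zero it out so channels the
    # retrieval loss deems important are not pruned away.
    conflict = (g_retrieval * g_compress) < 0
    return g_retrieval + g_compress.masked_fill(conflict, 0.0)

# Toy usage: attach an indicator to 64-channel block features.
indicator = ChannelIndicator(channels=64)
features = torch.randn(2, 64, 32, 32)
scaled = indicator(features)
loss = compression_loss([indicator])
loss.backward()
```

The per-channel scale here is a simplification of the 1x1 convolution named in the abstract; both reduce to a learnable importance weight per channel, which is what makes channels with near-zero weight removable when the student is compressed.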
Keywords
Deep learning architectures and techniques
Discipline
Databases and Information Systems | Graphics and Human Computer Interfaces
Research Areas
Data Science and Engineering
Publication
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR): Vancouver, June 17-24: Proceedings
First Page
16006
Last Page
16015
ISBN
9798350301298
Identifier
10.1109/CVPR52729.2023.01536
Publisher
IEEE
City or Country
Piscataway, NJ
Citation
XIE, Yi; ZHANG, Huaidong; XU, Xuemiao; ZHU, Jianqing; and HE, Shengfeng.
Towards a smaller student: Capacity dynamic distillation for efficient image retrieval. (2023). 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR): Vancouver, June 17-24: Proceedings. 16006-16015.
Available at: https://ink.library.smu.edu.sg/sis_research/8448
Copyright Owner and License
Authors
Creative Commons License
This work is licensed under a Creative Commons Attribution 3.0 License.
Additional URL
https://doi.org/10.1109/CVPR52729.2023.01536
Included in
Databases and Information Systems Commons, Graphics and Human Computer Interfaces Commons