Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
6-2020
Abstract
Multi-Class Incremental Learning (MCIL) aims to learn new concepts by incrementally updating a model trained on previous concepts. However, there is an inherent trade-off between effectively learning new concepts and avoiding catastrophic forgetting of previous ones. To alleviate this issue, it has been proposed to keep around a few exemplars of the previous concepts, but the effectiveness of this approach heavily depends on how representative these exemplars are. This paper proposes a novel and automatic framework we call mnemonics, in which we parameterize exemplars and make them optimizable in an end-to-end manner. We train the framework through bilevel optimization, i.e., at the model level and the exemplar level. We conduct extensive experiments on three MCIL benchmarks, CIFAR-100, ImageNet-Subset and ImageNet, and show that using mnemonics exemplars surpasses the state of the art by a large margin. Intriguingly, the mnemonics exemplars tend to lie on the boundaries between classes.
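The bilevel scheme described in the abstract can be illustrated with a short sketch. The snippet below is a minimal PyTorch illustration under assumed placeholders, not the authors' implementation: the linear model, random stand-in data, and hyperparameters are all hypothetical. The inner (model-level) step takes a differentiable gradient step on the parameterized exemplars; the outer (exemplar-level) step evaluates the adapted weights on held-out data and backpropagates that loss into the exemplar pixels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical tiny linear classifier standing in for the CNN backbone.
model = nn.Linear(32 * 32 * 3, 10)

# Key idea: exemplars are trainable parameters. Random initialization is
# purely illustrative; the paper initializes from real old-class images.
exemplars = torch.randn(20, 32 * 32 * 3, requires_grad=True)
exemplar_labels = torch.randint(0, 10, (20,))
exemplar_opt = torch.optim.Adam([exemplars], lr=1e-2)

# Held-out batch used to score how well the exemplars represent their
# classes (random stand-in data for this sketch).
val_x, val_y = torch.randn(64, 32 * 32 * 3), torch.randint(0, 10, (64,))

inner_lr = 0.1
for step in range(100):
    # Model-level (inner) step: one differentiable SGD step on the
    # exemplars; create_graph=True keeps the graph so the outer loss
    # can backpropagate into the exemplar pixels.
    w, b = model.weight, model.bias
    inner_loss = F.cross_entropy(F.linear(exemplars, w, b), exemplar_labels)
    gw, gb = torch.autograd.grad(inner_loss, (w, b), create_graph=True)
    w_new, b_new = w - inner_lr * gw, b - inner_lr * gb

    # Exemplar-level (outer) step: evaluate the adapted weights on the
    # held-out batch and update the exemplars themselves.
    outer_loss = F.cross_entropy(F.linear(val_x, w_new, b_new), val_y)
    exemplar_opt.zero_grad()
    outer_loss.backward()
    exemplar_opt.step()
```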
Discipline
Databases and Information Systems | Graphics and Human Computer Interfaces
Research Areas
Data Science and Engineering
Publication
Proceedings of the 33rd IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Virtual Conference, June 14-19, 2020
First Page
12245
Last Page
12254
Identifier
10.1109/CVPR42600.2020.01226
Publisher
IEEE
City or Country
Virtual Conference
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://openaccess.thecvf.com/content_CVPR_2020/papers/Liu_Mnemonics_Training_Multi-Class_Incremental_Learning_Without_Forgetting_CVPR_2020_paper.pdf
Included in
Databases and Information Systems Commons, Graphics and Human Computer Interfaces Commons