Publication Type

Conference Proceeding Article

Version

acceptedVersion

Publication Date

6-2020

Abstract

Multi-Class Incremental Learning (MCIL) aims to learn new concepts by incrementally updating a model trained on previous concepts. However, there is an inherent trade-off between effectively learning new concepts and avoiding catastrophic forgetting of previous ones. To alleviate this issue, it has been proposed to keep around a few examples of the previous concepts, but the effectiveness of this approach heavily depends on the representativeness of these examples. This paper proposes a novel and automatic framework we call mnemonics, in which we parameterize exemplars and make them optimizable in an end-to-end manner. We train the framework through bilevel optimization, i.e., at the model level and the exemplar level. We conduct extensive experiments on three MCIL benchmarks, CIFAR-100, ImageNet-Subset and ImageNet, and show that using mnemonics exemplars can surpass the state-of-the-art by a large margin. Interestingly, the mnemonics exemplars tend to lie on the boundaries between different classes.
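The abstract's core mechanism, treating exemplars as trainable parameters and tuning them with an alternating model-level / exemplar-level loop, can be sketched as follows. This is only an illustrative first-order approximation in PyTorch with toy data, a single linear classifier and a one-step inner update; all names, shapes and hyperparameters here are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-ins for old-class and new-class training data (hypothetical).
old_x = torch.randn(64, 16)
old_y = torch.zeros(64, dtype=torch.long)
new_x = torch.randn(64, 16) + 1.0
new_y = torch.ones(64, dtype=torch.long)

# Parameterized exemplars: a handful of trainable "images" for the old class.
exemplars = old_x[:4].clone().requires_grad_()
exemplar_y = torch.zeros(4, dtype=torch.long)

# A tiny linear classifier keeps the inner update easy to write functionally.
W = (0.01 * torch.randn(2, 16)).requires_grad_()
b = torch.zeros(2, requires_grad=True)

opt_model = torch.optim.SGD([W, b], lr=0.1)
opt_exemplars = torch.optim.Adam([exemplars], lr=0.05)
inner_lr = 0.5

for step in range(200):
    # Model-level phase: train the classifier on new data plus the (detached)
    # exemplars, standing in for the usual incremental-learning update.
    x = torch.cat([new_x, exemplars.detach()])
    y = torch.cat([new_y, exemplar_y])
    loss = F.cross_entropy(F.linear(x, W, b), y)
    opt_model.zero_grad()
    loss.backward()
    opt_model.step()

    # Exemplar-level phase: take one differentiable gradient step on temporary
    # "fast" weights using only the exemplars, then ask the adapted model to
    # classify the full old-class set; backpropagate that loss to the exemplars.
    inner_loss = F.cross_entropy(F.linear(exemplars, W, b), exemplar_y)
    gW, gb = torch.autograd.grad(inner_loss, (W, b), create_graph=True)
    W_fast, b_fast = W - inner_lr * gW, b - inner_lr * gb
    outer_loss = F.cross_entropy(F.linear(old_x, W_fast, b_fast), old_y)
    opt_exemplars.zero_grad()
    outer_loss.backward()
    opt_exemplars.step()
```

The one-step inner update with `create_graph=True` is a common way to approximate the exemplar-level objective (exemplars should let a fine-tuned model still classify the full old-class data well); the paper's actual procedure operates on deep networks over multiple incremental phases.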

Keywords

Training, Optimization, Data models, Computational modeling, Generative adversarial networks, Training data

Discipline

Databases and Information Systems | Graphics and Human Computer Interfaces

Research Areas

Data Science and Engineering

Publication

2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR): Virtual Conference, June 14-19: Proceedings

First Page

12245

Last Page

12254

ISBN

9781728171685

Identifier

10.1109/CVPR42600.2020.01226

Publisher

IEEE

City or Country

Piscataway, NJ

Copyright Owner and License

Authors

Additional URL

https://doi.org/10.1109/CVPR42600.2020.01226
