Publication Type

Conference Proceeding Article

Version

acceptedVersion

Publication Date

10-2022

Abstract

We are interested in learning robust models from insufficient data, without the need for any externally pre-trained model checkpoints. First, compared to sufficient data, we show why insufficient data renders the model more easily biased to the limited training environments, which are usually different from those at testing. For example, if all the training "swan" samples are "white", the model may wrongly use the "white" environment to represent the intrinsic class "swan". Then, we justify that the equivariance inductive bias can retain the class feature while the invariance inductive bias can remove the environmental feature, leaving only the class feature that generalizes to any environmental change at testing. To impose these two biases on learning, for equivariance, we demonstrate that any off-the-shelf contrastive-based self-supervised feature learning method can be deployed; for invariance, we propose a class-wise invariant risk minimization (IRM) that efficiently tackles the challenge of missing environment annotations in conventional IRM. State-of-the-art experimental results on real-world visual benchmarks (NICO and VIPriors ImageNet) validate the great potential of the two inductive biases in significantly reducing training data and parameters.
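
To make the invariance idea concrete, below is a minimal sketch of what a class-wise IRM objective could look like, assuming an IRMv1-style gradient penalty (Arjovsky et al., 2019) that is accumulated per class label instead of per annotated environment; the function names, the per-class grouping, and the penalty weight `lam` are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits, labels):
    """IRMv1-style penalty: squared gradient of the risk with respect to a
    dummy scale on the classifier output (illustrative sketch)."""
    scale = torch.ones(1, requires_grad=True, device=logits.device)
    loss = F.cross_entropy(logits * scale, labels)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return (grad ** 2).sum()

def class_wise_irm_loss(logits, labels, lam=1.0):
    """Hypothetical class-wise variant: environment annotations are missing,
    so samples are grouped by class label and the penalty is summed over
    classes alongside the ordinary cross-entropy risk."""
    risk = F.cross_entropy(logits, labels)
    penalty = logits.new_zeros(())
    for c in labels.unique():
        mask = labels == c
        penalty = penalty + irm_penalty(logits[mask], labels[mask])
    return risk + lam * penalty
```

In this sketch, the per-class penalty encourages the classifier to be simultaneously optimal across the groups it sees, which is the spirit of removing environment-specific features; how the paper actually forms the groups and weights the penalty should be taken from the publication itself.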

Keywords

Inductive Bias, Equivariance, Invariant Risk Minimization

Discipline

Databases and Information Systems | Graphics and Human Computer Interfaces | Numerical Analysis and Scientific Computing

Research Areas

Data Science and Engineering

Publication

Computer Vision - ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings

Volume

13671

First Page

241

Last Page

258

ISBN

978-3-031-20083-0

Identifier

10.1007/978-3-031-20083-0_15

Publisher

Springer

City or Country

Cham

Copyright Owner and License

Authors

Additional URL

https://doi.org/10.1007/978-3-031-20083-0_15
