Publication Type
Conference Proceeding Article
Version
acceptedVersion
Publication Date
12-2023
Abstract
Domain Adaptation (DA) is persistently challenged by the spurious correlation between domain-invariant features (e.g., class identity) and domain-specific features (e.g., environment), a correlation that does not generalize to the target domain. Unfortunately, even when enriched with additional unlabeled target domains, existing Unsupervised DA (UDA) methods still suffer from it. This is because the source-domain supervision treats the target-domain samples only as auxiliary data (e.g., by pseudo-labeling), while the inherent distribution of the target domain, where the valuable de-correlation clues hide, is disregarded. We propose to make the U in UDA matter by giving the two domains equal status. Specifically, we learn an invariant classifier whose predictions are simultaneously consistent with the labels in the source domain and the clusters in the target domain, so the spurious correlation, which is inconsistent in the target domain, is removed. We dub our approach "Invariant CONsistency learning" (ICON). Extensive experiments show that ICON achieves state-of-the-art performance on the classic UDA benchmarks Office-Home and VisDA-2017, and outperforms all conventional methods on the challenging WILDS 2.0 benchmark.
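The core idea, a single classifier trained to agree with source labels and with clusters discovered in the unlabeled target domain, can be sketched in a few lines. The following is a minimal illustrative sketch, not the authors' implementation: ToyNet, icon_style_loss, the k-means clustering step, the Hungarian alignment of cluster ids to class ids, and the tgt_weight parameter are all assumptions made for this example.

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans

class ToyNet(nn.Module):
    """Minimal feature extractor plus linear classifier head."""
    def __init__(self, in_dim=64, feat_dim=32, n_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.head = nn.Linear(feat_dim, n_classes)

def icon_style_loss(model, x_src, y_src, x_tgt, n_classes, tgt_weight=1.0):
    # (a) Supervised term: cross-entropy against the source labels.
    logits_src = model.head(model.backbone(x_src))
    loss_src = F.cross_entropy(logits_src, y_src)

    # (b) Unsupervised term: cluster the target features, align cluster ids
    # to class ids by Hungarian matching against the classifier's own
    # predictions, then enforce consistency with the cluster assignments.
    feat_tgt = model.backbone(x_tgt)
    logits_tgt = model.head(feat_tgt)
    clusters = KMeans(n_clusters=n_classes, n_init=10).fit_predict(
        feat_tgt.detach().cpu().numpy())
    preds = logits_tgt.argmax(dim=1).cpu().numpy()
    # cost[c, k] = -(number of samples in cluster c currently predicted as k)
    cost = np.zeros((n_classes, n_classes))
    for c, k in zip(clusters, preds):
        cost[c, k] -= 1
    _, mapping = linear_sum_assignment(cost)  # cluster id -> class id
    pseudo = torch.as_tensor(mapping[clusters], device=x_tgt.device)
    loss_tgt = F.cross_entropy(logits_tgt, pseudo)

    return loss_src + tgt_weight * loss_tgt

# Usage on random stand-ins for a source batch and a target batch.
model = ToyNet()
x_src, y_src = torch.randn(128, 64), torch.randint(0, 10, (128,))
x_tgt = torch.randn(128, 64)
loss = icon_style_loss(model, x_src, y_src, x_tgt, n_classes=10)
loss.backward()

Because both terms flow through the same classifier head, a correlation that predicts labels in the source but does not reproduce the cluster structure of the target is penalized, which is the de-correlation effect the abstract describes.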
Discipline
Databases and Information Systems
Research Areas
Data Science and Engineering
Publication
Proceedings of the 37th Conference on Neural Information Processing Systems (NeurIPS 2023), New Orleans, December 10-16
First Page
1
Last Page
20
Publisher
NeurIPS
City or Country
New Orleans
Citation
YUE, Zhongqi; SUN, Qianru; and ZHANG, Hanwang.
Make the U in UDA matter: Invariant consistency learning for unsupervised domain adaptation. (2023). Proceedings of the 37th Conference on Neural Information Processing Systems (NeurIPS 2023), New Orleans, December 10-16. 1-20.
Available at: https://ink.library.smu.edu.sg/sis_research/8474
Copyright Owner and License
Authors
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.