Publication Type
Conference Proceeding Article
Version
acceptedVersion
Publication Date
10-2021
Abstract
Existing Unsupervised Domain Adaptation (UDA) literature adopts the covariate shift and conditional shift assumptions, which essentially encourage models to learn common features across domains. However, due to the lack of supervision in the target domain, they suffer from the semantic loss: the feature will inevitably lose semantics that are non-discriminative in the source domain but discriminative in the target domain. We use a causal view, transportability theory [41], to identify that such loss is in fact a confounding effect, which can only be removed by causal intervention. However, the theoretical solution provided by transportability is far from practical for UDA, because it requires the stratification and representation of the unobserved confounder that causes the domain gap. To this end, we propose a practical solution: Transporting Causal Mechanisms (TCM), which identifies the confounder stratum and representations by using the domain-invariant disentangled causal mechanisms, discovered in an unsupervised fashion. Our TCM is both theoretically and empirically grounded. Extensive experiments show that TCM achieves state-of-the-art performance on three challenging UDA benchmarks: ImageCLEF-DA, Office-Home, and VisDA-2017. Code is available at https://github.com/yue-zhongqi/tcm.
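The causal intervention mentioned in the abstract can be read as an instance of the standard transport formula from transportability theory (Pearl and Bareinboim); the sketch below uses generic notation (source distribution P, target distribution P*, confounder strata z) rather than the paper's own:

\[
P^{*}\bigl(y \mid \mathrm{do}(x)\bigr) \;=\; \sum_{z} P\bigl(y \mid \mathrm{do}(x),\, z\bigr)\, P^{*}(z)
\]

Here the conditional causal effect P(y | do(x), z) is assumed invariant across domains and can be estimated in the labeled source domain, while the confounder distribution P*(z) must be re-estimated in the unlabeled target domain; the practical difficulty the abstract points to is that z is unobserved and must first be stratified and represented.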
Discipline
Graphics and Human Computer Interfaces
Research Areas
Intelligent Systems and Optimization
Publication
Proceedings of the 2021 International Conference on Computer Vision (ICCV), Virtual Conference, October 11-17
First Page
1
Last Page
16
City or Country
Virtual Conference
Citation
YUE, Zhongqi; SUN, Qianru; HUA, Xian-Sheng; and ZHANG, Hanwang.
Transporting causal mechanisms for unsupervised domain adaptation. (2021). Proceedings of the 2021 International Conference on Computer Vision (ICCV), Virtual Conference, October 11-17. 1-16.
Available at: https://ink.library.smu.edu.sg/sis_research/6229
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.