Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
10-2021
Abstract
Attention modules do not always help deep models learn causal features that are robust in any confounding context, e.g., a foreground object feature that is invariant to different backgrounds. This is because confounders trick the attention into capturing spurious correlations that benefit the prediction when the training and testing data are IID (independent and identically distributed), but harm the prediction when the data are OOD (out-of-distribution). The only fundamental solution for learning causal attention is causal intervention, which requires additional annotations of the confounders, e.g., a “dog” model is learned within “grass+dog” and “road+dog” contexts respectively, so the “grass” and “road” contexts no longer confound the “dog” recognition. However, such annotation is not only prohibitively expensive, but also inherently problematic, as the confounders are elusive in nature. In this paper, we propose a causal attention module (CaaM) that self-annotates the confounders in an unsupervised fashion. In particular, multiple CaaMs can be stacked and integrated into a conventional attention CNN and a self-attention Vision Transformer. In OOD settings, deep models with CaaM significantly outperform those without it; even in IID settings, CaaM also improves attention localization, showing great potential in applications that require robust visual saliency. Codes are available at https://github.com/Wangt-CN/CaaM.
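The sketch below is a minimal, hypothetical illustration of the general idea of stacking attention modules that split features into an attended stream and its complement, as the abstract describes for CaaM. It is not the authors' implementation (see https://github.com/Wangt-CN/CaaM for that); the class names `ComplementaryAttention` and `StackedAttention` and all parameters are invented here for illustration only.

```python
# Hypothetical sketch: each block produces a soft spatial attention map A and
# its complement 1 - A, splitting features into a putatively "causal" stream
# and a putatively "confounding" stream; blocks are stacked so the attended
# stream is refined repeatedly. Names and design are assumptions, not CaaM.
import torch
import torch.nn as nn


class ComplementaryAttention(nn.Module):
    """One block: soft spatial attention A and its complement 1 - A."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 conv producing a single-channel attention logit per location.
        self.attn_logits = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor):
        a = torch.sigmoid(self.attn_logits(x))    # (B, 1, H, W) in [0, 1]
        return x * a, x * (1.0 - a)               # attended vs. complement


class StackedAttention(nn.Module):
    """Stack several blocks; each refines the attended stream further."""

    def __init__(self, channels: int, num_blocks: int = 3):
        super().__init__()
        self.blocks = nn.ModuleList(
            [ComplementaryAttention(channels) for _ in range(num_blocks)]
        )

    def forward(self, x: torch.Tensor):
        complements = []
        for block in self.blocks:
            x, comp = block(x)
            complements.append(comp)
        return x, complements


if __name__ == "__main__":
    feats = torch.randn(2, 64, 14, 14)            # fake backbone features
    attended, complements = StackedAttention(64)(feats)
    print(attended.shape, len(complements))       # torch.Size([2, 64, 14, 14]) 3
```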
Discipline
Graphics and Human Computer Interfaces
Research Areas
Intelligent Systems and Optimization
Publication
Proceedings of the 2021 International Conference on Computer Vision (ICCV), Virtual Conference, October 11-17
First Page
3091
Last Page
3100
City or Country
Virtual Conference
Citation
WANG, Tan; ZHOU, Chang; SUN, Qianru; and ZHANG, Hanwang.
Causal attention for unbiased visual recognition. (2021). Proceedings of the 2021 International Conference on Computer Vision (ICCV), Virtual Conference, October 11-17. 3091-3100.
Available at: https://ink.library.smu.edu.sg/sis_research/6228
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.