Publication Type
Conference Proceeding Article
Version
submittedVersion
Publication Date
12-2022
Abstract
Out-of-distribution (OOD) settings measure a model's performance when the distribution of the test data differs from that of the training data. NLU models are known to suffer in OOD settings (Utama et al., 2020b). We study this issue from the perspective of causality, which views confounding bias as the reason models learn spurious correlations. While a common solution is to perform intervention, existing methods handle only a single, known confounder, whereas in many NLU tasks the confounders can be both unknown and multifactorial. In this paper, we propose a novel interventional training method called Bottom-up Automatic Intervention (BAI) that performs multi-granular intervention with identified multifactorial confounders. Our experiments on three NLU tasks, namely natural language inference, fact verification, and paraphrase identification, show the effectiveness of BAI in tackling OOD settings.
Keywords
Natural language understanding, Out-of-domain detection, Dialogue system, Text classification
Discipline
Artificial Intelligence and Robotics
Research Areas
Data Science and Engineering
Publication
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Abu Dhabi, December 7 - 11
City or Country
Abu Dhabi
Citation
YU, Sicheng; JIANG, Jing; ZHANG, Hao; NIU, Yulei; SUN, Qianru; and BING, Lidong.
Interventional training for out-of-distribution natural language understanding. (2022). Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Abu Dhabi, December 7 - 11.
Available at: https://ink.library.smu.edu.sg/sis_research/7548
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.