Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

11-2021

Abstract

Despite the significant progress achieved in text summarization, factual inconsistency in generated summaries still severely limits practical applications. Among the key factors for ensuring factual consistency, a reliable automatic evaluation metric is the first and most crucial. However, existing metrics either neglect the intrinsic cause of factual inconsistency or rely on auxiliary tasks, leading to an unsatisfactory correlation with human judgments or reduced convenience in practice. In light of these challenges, we propose a novel metric that evaluates the factual consistency of text summarization via counterfactual estimation, formulating the causal relationship among the source document, the generated summary, and the language prior. Removing the effect of the language prior, which can cause factual inconsistency, from the total causal effect on the generated summary provides a simple yet effective way to evaluate consistency without relying on auxiliary tasks. We conduct a series of experiments on three public abstractive text summarization datasets and demonstrate the advantages of the proposed metric in both its correlation with human judgments and its convenience of use. The source code is available at https://github.com/xieyxclack/factual_coco.
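The counterfactual idea described above can be sketched as follows: score each summary token by how much the source document raises its likelihood relative to the language prior alone (i.e., with the document masked out). This is a minimal toy sketch, not the paper's implementation; `toy_lm_prob` is a hypothetical stand-in for a pretrained language model's conditional token probability.

```python
def toy_lm_prob(token, context, prefix):
    # Hypothetical LM stand-in: a token is likelier when it is
    # supported by the conditioning context (the source document).
    return 0.8 if token in context else 0.1

def counterfactual_consistency(summary_tokens, doc_tokens):
    """Average per-token effect of the source document:
    p(token | document, prefix) minus p(token | masked document, prefix),
    i.e., the total causal effect minus the language-prior effect."""
    scores = []
    for i, tok in enumerate(summary_tokens):
        prefix = summary_tokens[:i]
        with_doc = toy_lm_prob(tok, doc_tokens, prefix)
        without_doc = toy_lm_prob(tok, [], prefix)  # counterfactual: document masked
        scores.append(with_doc - without_doc)
    return sum(scores) / len(scores)

doc = ["the", "cat", "sat", "on", "the", "mat"]
faithful = ["cat", "sat", "mat"]          # tokens grounded in the document
hallucinated = ["dog", "ran", "home"]     # tokens the document does not support
print(counterfactual_consistency(faithful, doc))      # 0.7 under this toy LM
print(counterfactual_consistency(hallucinated, doc))  # 0.0 under this toy LM
```

Under this toy model, grounded summaries score higher than hallucinated ones because the language-prior contribution is subtracted out for every token.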

Discipline

Databases and Information Systems

Research Areas

Data Science and Engineering

Areas of Excellence

Digital transformation

Publication

Findings of the Association for Computational Linguistics: EMNLP 2021, Virtual Conference, November 7-11

First Page

100

Last Page

110

Identifier

10.18653/v1/2021.findings-emnlp.10

Publisher

Association for Computational Linguistics

City or Country

USA

Copyright Owner and License

Authors

Additional URL

https://doi.org/10.18653/v1/2021.findings-emnlp.10
