Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
5-2018
Abstract
A popular recent approach to answering open-domain questions is to first search for question-related passages and then apply reading comprehension models to extract answers. Existing methods usually extract answers from single passages independently, but some questions require combining evidence from across different sources to answer correctly. In this paper, we propose two models which make use of multiple passages to generate their answers. Both use an answer re-ranking approach which reorders the answer candidates generated by an existing state-of-the-art QA model. We propose two methods, namely strength-based re-ranking and coverage-based re-ranking, that use the aggregated evidence from different passages to better determine the answer. Our models achieve state-of-the-art results on three public open-domain QA datasets: Quasar-T, SearchQA, and the open-domain version of TriviaQA, with about 8 percentage points of improvement on the first two datasets.
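The strength-based re-ranking described above can be illustrated with a minimal sketch: count how often each (normalized) candidate answer is extracted across different passages, and promote candidates with more cross-passage support. The function name, scoring, and tie-breaking below are our own simplified illustration, not the paper's exact formulation.

```python
from collections import Counter

def strength_rerank(candidates):
    """Re-rank answer candidates by cross-passage evidence strength.

    `candidates` is a list of (answer_string, reader_score) pairs,
    one candidate per retrieved passage. Candidates extracted by more
    passages rank higher; the reader score breaks ties. This is a
    hypothetical sketch of count-based "strength" aggregation.
    """
    # Count how many passages produced each normalized answer string.
    counts = Counter(ans.lower().strip() for ans, _ in candidates)
    # Keep the best reader score seen for each normalized answer.
    best_score = {}
    for ans, score in candidates:
        key = ans.lower().strip()
        best_score[key] = max(best_score.get(key, float("-inf")), score)
    # Rank primarily by cross-passage count, then by reader score.
    return sorted(best_score,
                  key=lambda k: (counts[k], best_score[k]),
                  reverse=True)

cands = [("Barack Obama", 0.9), ("obama", 0.4),
         ("Barack Obama", 0.7), ("George Bush", 0.95)]
print(strength_rerank(cands)[0])  # "barack obama": supported by two passages
```

Note how "Barack Obama" outranks "George Bush" despite the latter's higher single-passage reader score, because two independent passages support it; coverage-based re-ranking generalizes this idea by scoring candidates against the union of their supporting evidence rather than raw counts.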
Discipline
Databases and Information Systems
Research Areas
Data Science and Engineering
Publication
Proceedings of the 6th International Conference on Learning Representations, Vancouver, Canada, 2018 April 30 - May 3
First Page
1
Last Page
14
City or Country
Vancouver, Canada
Citation
WANG, Shuohang; YU, Mo; JIANG, Jing; ZHANG, Wei; GUO, Xiaoxiao; CHANG, Shiyu; WANG, Zhiguo; KLINGER, Tim; TESAURO, Gerald; and CAMPBELL, Murray.
Evidence aggregation for answer re-ranking in open-domain question answering. (2018). Proceedings of the 6th International Conference on Learning Representations, Vancouver, Canada, 2018 April 30 - May 3. 1-14.
Available at: https://ink.library.smu.edu.sg/sis_research/4238
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.