Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
12-2023
Abstract
Multimodal question answering (MMQA), which aims to derive the answer from multiple knowledge modalities (e.g., text, tables, and images), has received increasing attention due to its broad applications. Current approaches to MMQA often rely on single-modal or bi-modal QA models, which limits their ability to effectively integrate information across all modalities and to leverage the power of pre-trained language models. To address these limitations, we propose a novel framework called UniMMQA, which unifies three different input modalities into a text-to-text format by employing position-enhanced table linearization and diversified image captioning techniques. Additionally, we enhance cross-modal reasoning by incorporating a multimodal rationale generator, which produces textual descriptions of cross-modal relations for adaptation into the text-to-text generation process. Experimental results on three MMQA benchmark datasets show the superiority of UniMMQA in both supervised and unsupervised settings.
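For intuition, the following is a minimal sketch (not the authors' code) of how position-enhanced table linearization could flatten a table into a string while preserving structure, and how the three modalities might then be concatenated into a single text-to-text input. The "[R{i}C{j}]" marker format, the section tags, and all function names are illustrative assumptions; the paper's actual scheme is not specified in this record.

def linearize_table(header: list[str], rows: list[list[str]]) -> str:
    # Emit every cell with an explicit row/column marker so a
    # text-to-text model can still recover the table's layout.
    parts = [f"[R0C{j}] {col}" for j, col in enumerate(header)]
    for i, row in enumerate(rows, start=1):
        parts.extend(f"[R{i}C{j}] {cell}" for j, cell in enumerate(row))
    return " ".join(parts)

def build_unified_input(question: str, passage: str,
                        table_text: str, captions: list[str]) -> str:
    # Concatenate all modalities into one string for a text-to-text
    # model (e.g., a T5-style encoder-decoder); images are represented
    # by their generated captions. The tag names are assumptions.
    return (f"question: {question} context: {passage} "
            f"table: {table_text} images: {' | '.join(captions)}")

if __name__ == "__main__":
    table = linearize_table(["City", "Founded"], [["Singapore", "1819"]])
    print(build_unified_input(
        question="When was the pictured city founded?",
        passage="Singapore is a city-state in Southeast Asia.",
        table_text=table,
        captions=["An aerial photo of the Singapore skyline."],
    ))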
Keywords
Cross-modal, Input modalities, Language model, Linearization, Multi-modal, Question Answering, Single-modal, Text format
Discipline
Databases and Information Systems | Graphics and Human Computer Interfaces
Research Areas
Data Science and Engineering
Areas of Excellence
Digital transformation
Publication
Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10
First Page
9355
Last Page
9367
ISBN
9798891760615
Identifier
10.18653/v1/2023.findings-emnlp.626
Publisher
Association for Computational Linguistics
City or Country
Singapore
Citation
LUO, Haohao; SHEN, Ying; and DENG, Yang.
Unifying text, tables, and images for multimodal question answering. (2023). Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10. 9355-9367.
Available at: https://ink.library.smu.edu.sg/sis_research/9120
Copyright Owner and License
Authors
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://doi.org/10.18653/v1/2023.findings-emnlp.626
Included in
Databases and Information Systems Commons, Graphics and Human Computer Interfaces Commons