Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

12-2023

Abstract

Multimodal question answering (MMQA), which aims to derive the answer from multiple knowledge modalities (e.g., text, tables, and images), has received increasing attention due to its broad applications. Current approaches to MMQA often rely on single-modal or bi-modal QA models, which limits their ability to effectively integrate information across all modalities and to leverage the power of pre-trained language models. To address these limitations, we propose a novel framework called UniMMQA, which unifies the three input modalities into a text-to-text format by employing position-enhanced table linearization and diversified image captioning techniques. Additionally, we enhance cross-modal reasoning by incorporating a multimodal rationale generator, which produces textual descriptions of cross-modal relations for adaptation into the text-to-text generation process. Experimental results on three MMQA benchmark datasets show the superiority of UniMMQA in both supervised and unsupervised settings.
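The position-enhanced table linearization mentioned in the abstract can be illustrated with a minimal sketch. The exact markup used by UniMMQA is not specified here, so the `[row i, col j]` position tags and the `linearize_table` helper below are assumptions for illustration only, not the paper's implementation.

```python
# Minimal sketch of position-enhanced table linearization (illustrative only;
# the actual token format used by UniMMQA may differ).

def linearize_table(header, rows):
    """Flatten a table into one text string, prefixing each cell with its
    row/column position so a text-to-text model can recover the table
    structure from the linear sequence."""
    parts = []
    for j, col in enumerate(header):
        parts.append(f"[header, col {j}] {col}")
    for i, row in enumerate(rows):
        for j, cell in enumerate(row):
            parts.append(f"[row {i}, col {j}] {cell}")
    return " | ".join(parts)


if __name__ == "__main__":
    header = ["City", "Population"]
    rows = [["Singapore", "5.6M"], ["Austin", "0.96M"]]
    print(linearize_table(header, rows))
    # [header, col 0] City | [header, col 1] Population | [row 0, col 0] Singapore | ...
```

A string produced this way can be concatenated with the question text and generated image captions to form a single textual input for a text-to-text language model.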

Keywords

Cross-modal, Input modalities, Language model, Linearisation, Multi-modal, Power, Question Answering, Single-modal, Text format

Discipline

Databases and Information Systems | Graphics and Human Computer Interfaces

Research Areas

Data Science and Engineering

Areas of Excellence

Digital transformation

Publication

Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10

First Page

9355

Last Page

9367

ISBN

9798891760615

Identifier

10.18653/v1/2023.findings-emnlp.626

Publisher

Association for Computational Linguistics

City or Country

Texas

Copyright Owner and License

Authors

Additional URL

https://doi.org/10.18653/v1/2023.findings-emnlp.626
