Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
12-2023
Abstract
Humans possess a strong capability for reasoning beyond common sense. For example, given an unconventional image of a goldfish lying on the table next to an empty fishbowl, a human would effortlessly determine that the fish is not inside the fishbowl. The case, however, may be different for a vision-language model, whose reasoning could gravitate towards the common scenario that the fish is inside the bowl, despite the visual input. In this paper, we introduce a novel probing dataset named ROME (reasoning beyond commonsense knowledge) to evaluate whether state-of-the-art pre-trained vision-language models have the reasoning capability to correctly interpret counter-intuitive content. ROME contains images that defy commonsense knowledge with regard to color, shape, material, size, and positional relation. Experiments on state-of-the-art pre-trained vision-language models reveal that most of these models are still largely incapable of interpreting counter-intuitive scenarios. We hope that ROME will spur further investigations on reasoning beyond commonsense knowledge in vision-language research.
Discipline
Artificial Intelligence and Robotics
Research Areas
Data Science and Engineering
Publication
Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10
First Page
10185
Last Page
10197
Identifier
https://doi.org/10.48550/arXiv.2310.19301
City or Country
Singapore
Citation
ZHOU, Kankan; LAI, Eason; YEONG, Au Wei Bin; MOURATIDIS, Kyriakos; and JIANG, Jing.
ROME: Evaluating pre-trained vision-language models on reasoning beyond visual common sense. (2023). Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10. 10185-10197.
Available at: https://ink.library.smu.edu.sg/sis_research/8352
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://aclanthology.org/2023.findings-emnlp.683/