Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

3-2024

Abstract

Previous solutions to knowledge-based visual question answering (K-VQA) retrieve knowledge from external knowledge bases and use supervised learning to train the K-VQA model. Recently, pre-trained LLMs have been used as both a knowledge source and a zero-shot QA model for K-VQA and have demonstrated promising results. However, these recent methods do not explicitly show the knowledge needed to answer the questions and thus lack interpretability. Inspired by recent work on knowledge generation from LLMs for text-based QA, in this work we propose and test a similar knowledge-generation-based K-VQA method, which first generates knowledge from an LLM and then incorporates the generated knowledge for K-VQA in a zero-shot manner. We evaluate our method on two K-VQA benchmarks and find that it performs better than previous zero-shot K-VQA methods and that our generated knowledge is generally relevant and helpful.
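
The abstract describes a two-stage zero-shot pipeline: first prompt an LLM to generate knowledge relevant to the question, then prompt again to answer the question conditioned on that knowledge. The sketch below illustrates this flow under stated assumptions; the caption-based image representation, prompt wording, function names, and the stub LLM are illustrative choices, not the authors' actual implementation.

```python
# Minimal sketch of a generate-then-answer zero-shot K-VQA pipeline, as outlined
# in the abstract. Everything here (interfaces, prompts, names) is an assumption
# for illustration only.
from typing import Callable

LLM = Callable[[str], str]  # any text-in, text-out model call


def generate_knowledge(llm: LLM, caption: str, question: str, n_facts: int = 3) -> str:
    """Stage 1: ask the LLM for short knowledge statements relevant to the question."""
    prompt = (
        f"Image caption: {caption}\n"
        f"Question: {question}\n"
        f"List {n_facts} short facts that could help answer the question:\n"
    )
    return llm(prompt)


def answer_with_knowledge(llm: LLM, caption: str, question: str, knowledge: str) -> str:
    """Stage 2: answer zero-shot, conditioning on the generated knowledge."""
    prompt = (
        f"Image caption: {caption}\n"
        f"Knowledge: {knowledge}\n"
        f"Question: {question}\n"
        "Answer with a short phrase:"
    )
    return llm(prompt).strip()


def kvqa_pipeline(llm: LLM, caption: str, question: str) -> str:
    knowledge = generate_knowledge(llm, caption, question)
    return answer_with_knowledge(llm, caption, question, knowledge)


if __name__ == "__main__":
    # Stub LLM so the sketch runs without any API; swap in a real model call.
    def stub_llm(prompt: str) -> str:
        if "short facts" in prompt:
            return "A croissant is a flaky pastry that originated in France."
        return "France"

    print(kvqa_pipeline(stub_llm, "a croissant on a plate",
                        "Which country is this food associated with?"))
```

Keeping the two prompts separate is what makes the intermediate knowledge explicit and inspectable, which is the interpretability benefit the abstract highlights over methods that query the LLM end-to-end.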

Discipline

Artificial Intelligence and Robotics | Numerical Analysis and Scientific Computing

Research Areas

Data Science and Engineering

Publication

EACL 2024: Conference of the European Chapter of the Association for Computational Linguistics, St. Julian's, Malta, March 17-22: Findings

First Page

533

Last Page

549

ISBN

9798891760936

Publisher

Association for Computational Linguistics (ACL)

City or Country

St. Julian's

Copyright Owner and License

Authors

Additional URL

https://aclanthology.org/2024.findings-eacl.36
