Publication Type
Conference Proceeding Article
Version
acceptedVersion
Publication Date
10-2022
Abstract
This paper proposes a Video Graph Transformer (VGT) model for Video Question Answering (VideoQA). VGT's uniqueness is two-fold: 1) it designs a dynamic graph transformer module which encodes video by explicitly capturing the visual objects, their relations, and dynamics for complex spatio-temporal reasoning; and 2) it exploits disentangled video and text Transformers for relevance comparison between the video and text to perform QA, instead of an entangled cross-modal Transformer for answer classification. Vision-text communication is done by additional cross-modal interaction modules. With more reasonable video encoding and QA solution, we show that VGT can achieve much better performance on VideoQA tasks that challenge dynamic relation reasoning than prior arts in the pretraining-free scenario. Its performance even surpasses that of models pretrained with millions of external data. We further show that VGT can also benefit greatly from self-supervised cross-modal pretraining, yet with orders of magnitude smaller data. These results clearly demonstrate the effectiveness and superiority of VGT, and reveal its potential for more data-efficient pretraining. With comprehensive analyses and some heuristic observations, we hope that VGT can promote VQA research beyond coarse recognition/description towards fine-grained relation reasoning in realistic videos. Our code is available at https://github.com/sail-sg/VGT.
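To make the second point of the abstract concrete, below is a minimal sketch (not the authors' code; see the GitHub repository for the actual implementation) of QA by relevance comparison with disentangled video and text encoders: the video and each candidate question-answer string are encoded by separate Transformers, and the candidate whose text embedding is most similar to the video embedding is chosen. The module sizes, the use of nn.TransformerEncoder in place of the dynamic graph transformer, and mean pooling are illustrative assumptions.

```python
# Sketch of relevance-comparison VideoQA with disentangled video/text encoders.
# Assumptions: pre-extracted video features, toy vocabulary, standard Transformer encoders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledVideoTextQA(nn.Module):
    def __init__(self, dim=256, vocab=10000):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        # Stand-in for the dynamic graph transformer video encoder.
        self.video_encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        # Separate (disentangled) text encoder for candidate question-answer strings.
        self.text_encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.text_embed = nn.Embedding(vocab, dim)

    def forward(self, video_feats, answer_tokens):
        # video_feats: (B, T, dim) frame/object features; answer_tokens: (B, A, L) token ids.
        v = self.video_encoder(video_feats).mean(dim=1)            # (B, dim) video embedding
        B, A, L = answer_tokens.shape
        t = self.text_embed(answer_tokens.view(B * A, L))
        t = self.text_encoder(t).mean(dim=1).view(B, A, -1)        # (B, A, dim) candidate embeddings
        # Relevance comparison: similarity between the video and each candidate answer.
        return F.cosine_similarity(v.unsqueeze(1), t, dim=-1)      # (B, A) scores

model = DisentangledVideoTextQA()
scores = model(torch.randn(2, 8, 256), torch.randint(0, 10000, (2, 5, 12)))
print(scores.argmax(dim=-1))  # index of the highest-scoring candidate per video
```

The contrast with an entangled cross-modal Transformer is that here neither encoder attends to the other modality's tokens; the answer is read off a similarity score rather than a classification head over answers.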
Keywords
Dynamic visual graph, Transformer, VideoQA
Discipline
Graphics and Human Computer Interfaces
Research Areas
Intelligent Systems and Optimization
Areas of Excellence
Digital transformation
Publication
Proceedings of the 17th European Conference on Computer Vision (ECCV 2022), Tel Aviv, Israel, October 23-27
First Page
39
Last Page
58
ISBN
9783031200588
Identifier
10.1007/978-3-031-20059-5_3
Publisher
Springer
City or Country
Cham
Citation
XIAO, Junbin; ZHOU, Pan; CHUA, Tat-Seng; and YAN, Shuicheng.
Video graph transformer for video question answering. (2022). Proceedings of the 17th European Conference on Computer Vision (ECCV 2022), Tel Aviv, Israel, October 23-27. 39-58.
Available at: https://ink.library.smu.edu.sg/sis_research/8994
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.
Additional URL
https://doi.org/10.1007/978-3-031-20059-5_3