Publication Type
Journal Article
Version
submittedVersion
Publication Date
9-2024
Abstract
Current video generation models usually convert appearance and motion signals received from inputs (e.g., an image and text) or latent spaces (e.g., noise vectors) into consecutive frames, fulfilling a stochastic generation process that accommodates the uncertainty introduced by latent code sampling. However, this generation pattern lacks deterministic constraints on both appearance and motion, leading to uncontrollable and undesirable outcomes. To this end, we propose a new task called Text-driven Video Prediction (TVP). Taking the first frame and a text caption as inputs, the task is to synthesize the following frames, with the appearance and motion components provided by the image and the caption, respectively. The key to addressing TVP lies in fully exploiting the underlying motion information in the text description, thus facilitating plausible video generation. Indeed, the task is intrinsically a cause-and-effect problem, as the text content directly influences the motion changes across frames. To investigate the capability of text for causal inference of progressive motion information, our TVP framework contains a Text Inference Module (TIM) that produces stepwise embeddings to regulate motion inference for subsequent frames. In particular, a refinement mechanism incorporating global motion semantics ensures coherent generation. Extensive experiments are conducted on the Something-Something V2 and Single Moving MNIST datasets. The results demonstrate that our model outperforms the baselines, verifying the effectiveness of the proposed framework.
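The abstract does not specify the TIM architecture, so the following is only a minimal PyTorch sketch of one plausible reading: a recurrent module unrolls a global caption embedding into per-frame "stepwise" motion embeddings, and an autoregressive predictor fuses each embedding with the previous frame's features to decode the next frame. All names (StepwiseTextInference, TVPBackbone), layer sizes, and the 64x64 resolution are illustrative assumptions, not the paper's implementation; the global-semantics refinement mechanism is omitted.

```python
import torch
import torch.nn as nn

class StepwiseTextInference(nn.Module):
    """Hypothetical stand-in for the paper's Text Inference Module (TIM):
    a GRU unrolls one global caption embedding into per-frame embeddings."""
    def __init__(self, text_dim=256, hidden_dim=256):
        super().__init__()
        self.rnn = nn.GRUCell(text_dim, hidden_dim)
        self.proj = nn.Linear(hidden_dim, text_dim)

    def forward(self, caption_emb, num_steps):
        # caption_emb: (B, text_dim) global sentence embedding
        h = torch.zeros(caption_emb.size(0), self.rnn.hidden_size,
                        device=caption_emb.device)
        step_embs = []
        for _ in range(num_steps):
            h = self.rnn(caption_emb, h)      # condition every step on the caption
            step_embs.append(self.proj(h))    # per-frame motion embedding
        return torch.stack(step_embs, dim=1)  # (B, T, text_dim)

class TVPBackbone(nn.Module):
    """Minimal autoregressive predictor: first frame + stepwise text
    embeddings -> future frames. Layer sizes are illustrative only."""
    def __init__(self, text_dim=256, feat_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(          # 64x64 RGB -> (feat_dim, 8, 8)
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, feat_dim, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.fuse = nn.Conv2d(feat_dim + text_dim, feat_dim, 3, padding=1)
        self.decoder = nn.Sequential(          # (feat_dim, 8, 8) -> 64x64 RGB
            nn.ConvTranspose2d(feat_dim, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        self.tim = StepwiseTextInference(text_dim, text_dim)

    def forward(self, first_frame, caption_emb, num_steps):
        step_embs = self.tim(caption_emb, num_steps)   # (B, T, text_dim)
        frame, outputs = first_frame, []
        for t in range(num_steps):
            feat = self.encoder(frame)                 # (B, feat_dim, 8, 8)
            emb = step_embs[:, t][:, :, None, None]    # broadcast to a map
            emb_map = emb.expand(-1, -1, feat.size(2), feat.size(3))
            feat = self.fuse(torch.cat([feat, emb_map], dim=1))
            frame = self.decoder(feat)                 # next predicted frame
            outputs.append(frame)
        return torch.stack(outputs, dim=1)             # (B, T, 3, 64, 64)

# Shape check under these assumptions: batch of 2, 8 predicted frames.
model = TVPBackbone()
frames = model(torch.rand(2, 3, 64, 64), torch.rand(2, 256), num_steps=8)
print(frames.shape)  # torch.Size([2, 8, 3, 64, 64])
```

Feeding the same caption embedding into every GRU step while the hidden state evolves is one simple way to realize "stepwise" conditioning: the text stays fixed, but the inferred motion signal progresses frame by frame.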
Keywords
Text-driven Video Prediction, controllable video generation, motion inference
Discipline
Artificial Intelligence and Robotics | Graphics and Human Computer Interfaces
Research Areas
Intelligent Systems and Optimization
Publication
ACM Transactions on Multimedia Computing, Communications, and Applications
Volume
20
Issue
9
First Page
1
Last Page
15
ISSN
1551-6857
Identifier
10.1145/3675171
Publisher
Association for Computing Machinery (ACM)
Citation
SONG, Xue; CHEN, Jingjing; ZHU, Bin; and JIANG, Yu-Gang.
Text-driven video prediction. (2024). ACM Transactions on Multimedia Computing, Communications, and Applications, 20(9), 1-15.
Available at: https://ink.library.smu.edu.sg/sis_research/9356
Copyright Owner and License
Authors
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://doi.org/10.1145/3675171
Included in
Artificial Intelligence and Robotics Commons, Graphics and Human Computer Interfaces Commons