Publication Type
Journal Article
Version
publishedVersion
Publication Date
1-2020
Abstract
Video hyperlinking is a task aimed at enhancing the accessibility of large archives by establishing links between fragments of videos. The links model the aboutness between fragments for efficient traversal of video content. This paper addresses the problem of link construction from the perspective of cross-modal embedding. To this end, a generalized multi-modal auto-encoder is proposed. The encoder learns two embeddings from the visual and speech modalities, respectively, where each embedding performs self-modal and cross-modal translation. Furthermore, to preserve the neighbourhood structure of fragments, which is important for video hyperlinking, the auto-encoder is devised to model the data distribution of fragments in a dataset. Experiments are conducted on the Blip10000 dataset using the anchor fragments provided by the TRECVid Video Hyperlinking (LNK) task over the years 2016 and 2017. This paper shares empirical insights on a number of issues in cross-modal learning for video hyperlinking, including the preservation of neighbourhood structure in the embedding, model fine-tuning, and the issue of missing modalities.
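To make the described architecture concrete, the following is a minimal PyTorch sketch of a two-modality auto-encoder with self-modal and cross-modal reconstruction, as the abstract outlines. The layer sizes, module names (CrossModalAE, enc_v, dec_s, etc.), and the unweighted sum of losses are illustrative assumptions, not the authors' implementation; in particular, the neighbourhood-preserving term over the fragment distribution is omitted here.

```python
# Illustrative sketch only: dimensions, names, and losses are assumptions,
# not the model from the paper.
import torch
import torch.nn as nn

class CrossModalAE(nn.Module):
    def __init__(self, vis_dim=2048, spe_dim=1024, emb_dim=512):
        super().__init__()
        # One encoder per modality (visual features, speech transcripts).
        self.enc_v = nn.Sequential(nn.Linear(vis_dim, emb_dim), nn.Tanh())
        self.enc_s = nn.Sequential(nn.Linear(spe_dim, emb_dim), nn.Tanh())
        # One decoder per target modality; each is fed embeddings from
        # both encoders, yielding self-modal and cross-modal translation.
        self.dec_v = nn.Linear(emb_dim, vis_dim)
        self.dec_s = nn.Linear(emb_dim, spe_dim)

    def forward(self, v, s):
        e_v, e_s = self.enc_v(v), self.enc_s(s)
        return {
            "v_from_v": self.dec_v(e_v),  # self-modal reconstruction
            "s_from_s": self.dec_s(e_s),  # self-modal reconstruction
            "s_from_v": self.dec_s(e_v),  # cross-modal translation
            "v_from_s": self.dec_v(e_s),  # cross-modal translation
            "e_v": e_v, "e_s": e_s,
        }

def reconstruction_loss(out, v, s):
    # Unweighted sum of self-modal and cross-modal MSE terms (assumption).
    mse = nn.functional.mse_loss
    return (mse(out["v_from_v"], v) + mse(out["s_from_s"], s)
            + mse(out["v_from_s"], v) + mse(out["s_from_v"], s))

if __name__ == "__main__":
    model = CrossModalAE()
    v = torch.randn(4, 2048)  # batch of visual fragment features
    s = torch.randn(4, 1024)  # batch of speech (transcript) features
    out = model(v, s)
    print(reconstruction_loss(out, v, s).item())
```

In use, the two embeddings e_v and e_s would be compared, e.g. by cosine similarity, to rank candidate target fragments against an anchor fragment when constructing links.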
Keywords
Task analysis, Visualization, Joining processes, Benchmark testing, Feature extraction, Neural networks, Video hyperlinking, cross-modal translation, structure-preserving learning
Discipline
Graphics and Human Computer Interfaces | OS and Networks
Research Areas
Intelligent Systems and Optimization
Publication
IEEE Transactions on Multimedia
Volume
22
Issue
1
First Page
188
Last Page
200
ISSN
1520-9210
Identifier
10.1109/TMM.2019.2923121
Publisher
Institute of Electrical and Electronics Engineers
Citation
1
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.