Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
9-2007
Abstract
Multimedia-based ontology construction and reasoning have recently been recognized as two important issues in video search, particularly for bridging the semantic gap. The lack of coincidence between low-level features and user expectation makes concept-based ontology reasoning an attractive mid-level framework for interpreting high-level semantics. In this paper, we propose a novel model, namely the ontology-enriched semantic space (OSS), to provide a computable platform for modeling and reasoning about concepts in a linear space. OSS opens up the possibility of answering conceptual questions, such as how to achieve high coverage of the semantic space with a minimal set of concepts, and which set of concepts should be developed for video search. More importantly, query-to-concept mapping can be conducted more reasonably by guaranteeing the uniform and consistent comparison of concept scores for video search. We explore OSS for several tasks, including concept-based video search, word sense disambiguation, and multi-modality fusion. Our empirical findings show that OSS is a feasible solution to timely issues such as the measurement of concept combination and query-concept-dependent fusion.
Keywords
Concept-based video search, Ontology, Semantic space
Discipline
Data Storage Systems | Graphics and Human Computer Interfaces
Research Areas
Intelligent Systems and Optimization
Publication
Proceedings of the 15th ACM International Conference on Multimedia, MM2007, Augsburg, Bavaria, September 23-28
First Page
981
Last Page
990
ISBN
9781595937025
Identifier
10.1145/1291233.1291447
Publisher
ACM
City or Country
Augsburg, Bavaria
Citation
WEI, Xiao-Yong and NGO, Chong-wah. Ontology-enriched semantic space for video search. (2007). Proceedings of the 15th ACM International Conference on Multimedia, MM2007, Augsburg, Bavaria, September 23-28. 981-990.
Available at: https://ink.library.smu.edu.sg/sis_research/6526
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.