Conference Proceeding Article
In many real-world scenarios, e.g., multimedia applications, data often originate from multiple heterogeneous sources or are represented by diverse types of representation; such data are often referred to as "multi-modal data". Defining a distance between any two objects/items in multi-modal data is a key challenge for many real-world applications, including multimedia retrieval. In this paper, we present a novel online learning framework for learning distance functions on multi-modal data through a combination of multiple kernels. To tackle large-scale multimedia applications, we propose Online Multi-modal Distance Learning (OMDL) algorithms, which are significantly more efficient and scalable than state-of-the-art techniques. We conducted an extensive set of experiments on multi-modal image retrieval applications, in which the encouraging results validate the efficacy of the proposed technique.
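The abstract describes distances on multi-modal data built from a combination of multiple kernels. The paper's actual OMDL formulation is not given here, so the following is only a minimal illustrative sketch of the general idea: each modality contributes a kernel-induced distance, and the modality weights (which OMDL would learn online) combine them into one distance. The RBF kernel, the weight vector, and all names below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Gaussian RBF kernel on a single modality's feature vectors (assumed choice).
    return np.exp(-gamma * np.sum((x - y) ** 2))

def multimodal_distance(x_modalities, y_modalities, weights, gamma=1.0):
    """Distance as a weighted combination of per-modality kernel distances.

    Each modality m contributes the squared distance its kernel induces
    in feature space:
        d_m(x, y)^2 = k_m(x, x) + k_m(y, y) - 2 * k_m(x, y)
    The weights stand in for what a learner like OMDL would update online.
    """
    total = 0.0
    for x_m, y_m, w_m in zip(x_modalities, y_modalities, weights):
        d2 = (rbf_kernel(x_m, x_m, gamma) + rbf_kernel(y_m, y_m, gamma)
              - 2.0 * rbf_kernel(x_m, y_m, gamma))
        total += w_m * d2
    return np.sqrt(max(total, 0.0))  # guard against tiny negative round-off

# Two items, each described by two modalities (e.g., color and texture features).
x = [np.array([0.1, 0.2]), np.array([1.0, 0.0])]
y = [np.array([0.1, 0.2]), np.array([0.0, 1.0])]
w = [0.5, 0.5]
print(multimodal_distance(x, x, w))  # identical items -> 0.0
print(multimodal_distance(x, y, w))  # differ in the second modality -> positive
```

Learning would then adjust `w` from pairwise (dis)similarity feedback; here the weights are fixed purely for illustration.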
Computer Sciences | Databases and Information Systems
Data Management and Analytics
WSDM '13: Proceedings of the 6th ACM International Conference on Web Search and Data Mining: February 4-8, 2013, Rome, Italy
XIA, Hao; WU, Pengcheng; and HOI, Steven C. H.
Online Multi-modal Distance Learning for Scalable Multimedia Retrieval. (2013). WSDM '13: Proceedings of the 6th ACM International Conference on Web Search and Data Mining: February 4-8, 2013, Rome, Italy. 455-464. Research Collection School Of Information Systems.
Available at: http://ink.library.smu.edu.sg/sis_research/2337