Conference Proceeding Article
Cross-modal hashing (CMH) is an efficient technique for the fast retrieval of web image data, and it has gained considerable attention recently. However, traditional CMH methods usually apply batch learning to generate hash functions and codes, which is inefficient for web images that typically arrive in a streaming fashion. Online learning can be exploited for CMH, but existing online hashing methods still cannot solve two essential problems: efficient updating of hash codes and analysis of cross-modal correlation. In this paper, we propose Online Cross-modal Hashing (OCMH), which effectively addresses both problems by learning shared latent codes (SLC). In OCMH, hash codes are represented by the permanent SLC and a dynamic transfer matrix; the inefficient updating of hash codes is thus reduced to efficient updates of the SLC and transfer matrix, and the time complexity is independent of the database size. Moreover, the SLC is shared by all modalities, so it encodes the latent cross-modal correlation, which further improves the overall cross-modal correlation between heterogeneous data. Experimental results on two real-world multi-modal web image datasets, MIR Flickr and NUS-WIDE, demonstrate the effectiveness and efficiency of OCMH for online cross-modal web image retrieval.
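To make the decomposition described in the abstract concrete, the following is a minimal NumPy sketch of the SLC-plus-transfer-matrix idea. The variable names, dimensions, and the choice of two modalities are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n, c, r = 1000, 32, 16            # database size, SLC dimension, hash code length

# Permanent shared latent codes (SLC): one c-dimensional column per item,
# shared across modalities. Streaming data appends new columns; existing
# columns are not relearned.
V = rng.standard_normal((c, n))

# Dynamic transfer matrices, one per modality. Each is only r x c, so the
# cost of updating it does not grow with the database size n.
W_img = rng.standard_normal((r, c))
W_txt = rng.standard_normal((r, c))

# Hash codes factor through the SLC: updating a transfer matrix changes
# the codes of the whole database without touching the stored SLC columns.
B_img = np.sign(W_img @ V)        # r x n binary codes for the image modality
B_txt = np.sign(W_txt @ V)        # r x n binary codes for the text modality
```

Under these assumptions, an online update touches only the small transfer matrices and the SLC columns of newly arrived items, which is what makes the update cost independent of the number of items already indexed.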
Cross-modal hashing, image retrieval, hash codes, transfer matrix
Computer Sciences | Databases and Information Systems
Data Management and Analytics
Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16): Phoenix, AZ, February 12-17, 2016
Palo Alto, CA
XIE, Liang; SHEN, Jialie; and ZHU, Lei.
Online cross-modal hashing for web image retrieval. (2016). Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16): Phoenix, AZ, February 12-17, 2016. 294-300. Research Collection School Of Information Systems.
Available at: http://ink.library.smu.edu.sg/sis_research/3538
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.