Publication Type
Journal Article
Version
acceptedVersion
Publication Date
1-2013
Abstract
With the popularity of social media websites, extensive research efforts have been dedicated to tag-based social image search. Both visual information and tags have been investigated in this research field. However, most existing methods use tags and visual characteristics either separately or sequentially to estimate the relevance of images. In this paper, we propose an approach that simultaneously utilizes both visual and textual information to estimate the relevance of user-tagged images. The relevance estimation is determined with a hypergraph learning approach. In this method, a social image hypergraph is constructed, where vertices represent images and hyperedges represent visual or textual terms. Learning is achieved with the use of a set of pseudo-positive images, and the weights of hyperedges are updated throughout the learning process. In this way, the impact of different tags and visual words can be automatically modulated. Finally, comparative results of experiments conducted on a dataset including 370+ images are presented, which demonstrate the effectiveness of the proposed approach.
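The abstract describes a transductive hypergraph learning setup: images are vertices, each tag or visual word induces a hyperedge over the images containing it, relevance scores are propagated from pseudo-positive images, and hyperedge weights are updated during learning. The following is a minimal illustrative sketch of that general scheme, not the authors' exact algorithm; the function name, parameters (lam, eta, n_iters), and the simplified variance-based weight update are assumptions introduced here for illustration.

```python
import numpy as np

def hypergraph_relevance(H, y, lam=1.0, eta=0.1, n_iters=10):
    """Sketch of hypergraph-regularized relevance estimation.

    H   : (n_images, n_hyperedges) binary incidence matrix; each hyperedge
          groups the images that share one tag or one visual word.
    y   : (n_images,) initial labels; 1 for pseudo-positive images, 0 otherwise.
    lam : trade-off between hypergraph smoothness and the fit to y.
    The hyperedge weight update below is a simplified heuristic step, not the
    optimization procedure used in the paper.
    """
    n, m = H.shape
    w = np.ones(m)                  # hyperedge weights, adjusted during learning
    f = y.astype(float).copy()      # relevance scores to be estimated

    for _ in range(n_iters):
        # Degree matrices of the weighted hypergraph.
        d_v = H @ w                 # vertex degrees
        d_e = H.sum(axis=0)         # hyperedge degrees (number of incident images)
        Dv_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d_v, 1e-12)))
        De_inv = np.diag(1.0 / np.maximum(d_e, 1e-12))

        # Normalized hypergraph Laplacian:
        # L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}
        Theta = Dv_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_inv_sqrt
        L = np.eye(n) - Theta

        # Closed-form scores for  min_f  f^T L f + lam * ||f - y||^2
        f = np.linalg.solve(L + lam * np.eye(n), lam * y)

        # Simplified weight update: favor hyperedges whose member images
        # have consistent (low-variance) relevance scores.
        smoothness = np.array([f[H[:, e] > 0].var() if (H[:, e] > 0).any() else 0.0
                               for e in range(m)])
        w = np.maximum(w - eta * smoothness, 1e-6)
        w *= m / w.sum()            # keep the total hyperedge weight fixed

    return f, w
```

In this reading, images are then ranked for a query tag by sorting the returned relevance scores f, while the learned weights w indicate how informative each tag or visual word turned out to be.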
Keywords
Hypergraph learning, Social image search, Tag, Visual-textual
Discipline
Databases and Information Systems
Publication
IEEE Transactions on Image Processing
Volume
22
Issue
1
First Page
363
Last Page
376
ISSN
1057-7149
Identifier
10.1109/TIP.2012.2202676
Publisher
IEEE
Citation
GAO, Yue; WANG, Meng; ZHA, Zheng-Jun; SHEN, Jialie; LI, Xuelong; and WU, Xindong.
Visual-textual joint relevance learning for tag-based social image search. (2013). IEEE Transactions on Image Processing. 22, (1), 363-376.
Available at: https://ink.library.smu.edu.sg/sis_research/1511
Copyright Owner and License
Authors
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.
Additional URL
http://doi.org/10.1109/TIP.2012.2202676