Conference Proceeding Article
Tags of social images play a central role in text-based social image retrieval and browsing tasks. However, the original tags annotated by web users are often noisy, irrelevant, or incomplete descriptions of the image contents, which may severely deteriorate the performance of text-based image retrieval models. In this paper, we aim to overcome the challenge of social tag ranking for a corpus of social images with rich user-generated tags by proposing a novel two-view learning approach. It effectively exploits both the textual and visual contents of social images to discover the complicated relationship between tags and images. Unlike conventional learning approaches that usually assume some parametric model, our method is completely data-driven and makes no assumption about the underlying model, making the proposed solution more effective in practice. We formulate our method as an optimization task and present an efficient algorithm to solve it. To evaluate the efficacy of our method, we conducted an extensive set of experiments by applying our technique to both text-based social image retrieval and automatic image annotation tasks, in which encouraging results showed that the proposed method is more effective than conventional approaches.
Computer Sciences | Databases and Information Systems
Data Management and Analytics
WSDM'11: Proceedings of the 4th ACM International Conference on Web Search and Data Mining: Hong Kong, China, February 9-12, 2011
ZHUANG, Jinfeng and HOI, Steven. A Two-View Learning Approach for Image Tag Ranking. (2011). WSDM'11: Proceedings of the 4th ACM International Conference on Web Search and Data Mining: Hong Kong, China, February 9-12, 2011. 625-634. Research Collection School Of Information Systems.
Available at: http://ink.library.smu.edu.sg/sis_research/2353
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.