Publication Type

Conference Proceeding Article

Version

acceptedVersion

Publication Date

2-2011

Abstract

Tags of social images play a central role in text-based social image retrieval and browsing tasks. However, the original tags annotated by web users can be noisy, irrelevant, and often incomplete for describing the image contents, which may severely degrade the performance of text-based image retrieval models. In this paper, we address the challenge of social tag ranking for a corpus of social images with rich user-generated tags by proposing a novel two-view learning approach, which effectively exploits both the textual and visual contents of social images to discover the complicated relationship between tags and images. Unlike conventional learning approaches that usually assume some parametric model, our method is completely data-driven and makes no assumptions about the underlying model, making the proposed solution more effective in practice. We formally formulate our method as an optimization task and present an efficient algorithm to solve it. To evaluate the efficacy of our method, we conducted an extensive set of experiments by applying our technique to both text-based social image retrieval and automatic image annotation tasks, in which the encouraging results show that the proposed method is more effective than conventional approaches.

Keywords

Annotation, Image search, Optimization, Recommendation, Social images, Tag ranking, Two-view learning

Discipline

Computer Sciences | Databases and Information Systems

Research Areas

Data Science and Engineering

Publication

WSDM '11: Proceedings of the 4th ACM International Conference on Web Search and Data Mining, Hong Kong, China, February 9-12

First Page

625

Last Page

634

ISBN

978-1-4503-0493-1

Identifier

10.1145/1935826.1935913

Publisher

ACM

City or Country

New York

Copyright Owner and License

Publisher

Additional URL

https://doi.org/10.1145/1935826.1935913
