Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

7-2010

Abstract

Most existing reranking approaches to image search focus solely on mining “visual” cues within the initial search results. However, visual information alone cannot always provide enough guidance for reranking: for example, images with similar appearance do not always carry the same relevance to the query. Observing that multi-modality cues carry complementary relevance information, we propose the idea of co-reranking for image search, which jointly explores visual and textual information. Co-reranking couples two random walks, reinforcing the mutual exchange and propagation of relevance information across modalities. The mutual reinforcement is updated iteratively to constrain information exchange during the random walk, so the visual and textual reranking can draw on more reliable information from each other after every iteration. Experimental results on a real-world dataset (MSRA-MM) collected from the Bing image search engine show that co-reranking outperforms several existing approaches that ignore or only weakly model multi-modality interaction.
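The coupled-random-walk idea in the abstract can be illustrated with a minimal sketch. This is not the paper's exact formulation; the transition matrices, restart/coupling weights (`alpha`, `beta`), and score-fusion step here are illustrative assumptions. Each modality runs its own random walk with restart, but the restart distribution mixes in the other modality's current scores, which is the cross-modal reinforcement:

```python
import numpy as np

def co_rerank(W_v, W_t, r0, alpha=0.8, beta=0.5, iters=50):
    """Simplified co-reranking sketch: two coupled random walks.

    W_v, W_t : row-stochastic transition matrices built from visual and
               textual similarities (assumed precomputed; hypothetical here).
    r0       : initial relevance scores from the search engine's ranking.
    alpha    : weight on the walk's own graph propagation.
    beta     : coupling weight mixing in the other modality's scores.
    """
    r_v = r0.copy()
    r_t = r0.copy()
    for _ in range(iters):
        # Each walk propagates scores over its own similarity graph, but
        # restarts toward a mixture of the initial scores and the other
        # modality's current scores (the mutual reinforcement).
        r_v_new = alpha * W_v.T @ r_v + (1 - alpha) * (beta * r_t + (1 - beta) * r0)
        r_t_new = alpha * W_t.T @ r_t + (1 - alpha) * (beta * r_v + (1 - beta) * r0)
        r_v = r_v_new / r_v_new.sum()
        r_t = r_t_new / r_t_new.sum()
    # Fuse the two modality scores into a single reranked list.
    return 0.5 * (r_v + r_t)

# Toy example: 4 images with uniform similarity graphs.
n = 4
W = np.full((n, n), 1.0 / n)
r0 = np.array([0.4, 0.3, 0.2, 0.1])
scores = co_rerank(W, W, r0)
```

In this toy case the graphs carry no discriminative structure, so the initial ordering is preserved; in practice the visual and textual graphs differ, and the coupling lets each walk correct the other's errors.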

Keywords

Co-reranking, Graph model, Image search

Discipline

Data Storage Systems | Graphics and Human Computer Interfaces

Research Areas

Intelligent Systems and Optimization

Publication

Proceedings of the ACM International Conference on Image and Video Retrieval, ACM-CIVR 2010, Xi’an, China, July 5-7

First Page

34

Last Page

41

ISBN

9781450301176

Identifier

10.1145/1816041.1816048

Publisher

ACM

City or Country

Xi'an, China
