Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

6-2016

Abstract

Top-down saliency detection is a knowledge-driven search task. While some previous methods aim to learn this "knowledge" from category-specific data, others transfer existing annotations in a large dataset through appearance matching. In contrast, in this paper we propose a locate-by-exemplar strategy. This approach is challenging, as we use only a few exemplars (up to 4) and the appearances of the query object and the exemplars can differ greatly. To address this, we design a two-stage deep model that learns the intra-class association between exemplars and query objects: the first stage learns object-to-object association, and the second learns background discrimination. Extensive experimental evaluations show that the proposed method outperforms various baselines and category-specific models. In addition, we explore the influence of exemplar properties, in terms of exemplar number and quality. Furthermore, we show that the learned model is universal and generalizes well to unseen objects.
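A minimal sketch of the two-stage idea, for illustration only: it assumes a PyTorch-style implementation, and ExemplarSaliencyNet, its layer sizes, and the additive score fusion are hypothetical stand-ins, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

# Hypothetical sketch; all dimensions and the additive score fusion
# are illustrative assumptions, not the authors' exact design.
class ExemplarSaliencyNet(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        # Stage 1: score object-to-object association between an
        # exemplar embedding and a query-region embedding.
        self.assoc = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, 1))
        # Stage 2: discriminate object regions from background.
        self.background = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, 1))

    def forward(self, exemplars, query):
        # exemplars: (k, feat_dim) with k <= 4; query: (feat_dim,)
        pairs = torch.cat([exemplars, query.expand_as(exemplars)], dim=1)
        # Max over exemplars: the query need only match one of them.
        assoc_score = self.assoc(pairs).max()
        bg_score = self.background(query).squeeze()
        return assoc_score + bg_score  # combined saliency score

model = ExemplarSaliencyNet()
score = model(torch.randn(4, 256), torch.randn(256))  # toy inputs
```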

Keywords

Computer vision, Visualization, Feature extraction, Network architecture

Discipline

Artificial Intelligence and Robotics | Graphics and Human Computer Interfaces | Systems Architecture

Research Areas

Software and Cyber-Physical Systems

Publication

Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, Nevada, USA, June 27-30, 2016

First Page

5723

Last Page

5732

ISBN

9781467388528

Identifier

10.1109/CVPR.2016.617

Publisher

IEEE Computer Society

City or Country

New York, NY, USA

Copyright Owner and License

Authors

Additional URL

https://doi.org/10.1109/CVPR.2016.617
