Publication Type

Journal Article

Version

publishedVersion

Publication Date

8-2012

Abstract

Sufficient training examples are essential for effective learning of semantic visual concepts. In practice, however, acquiring noise-free training examples has always been expensive. Recently, the rapid popularization of social media websites, such as Flickr, has made it possible to collect training exemplars without human assistance. This paper proposes a novel and efficient approach to collecting training samples from noisily tagged Web images for visual concept learning, in which we try to maximize two important criteria of the automatically generated training sets: relevancy and coverage. For the former, a simple method named semantic field is introduced to handle imprecise and incomplete image tags. Specifically, the relevancy of an image to a target concept is predicted by collectively analyzing the image's associated tag list using two knowledge sources: the WordNet corpus and statistics from Flickr.com. To boost the coverage, or diversity, of the training sets, we further propose an ontology-based hierarchical pooling method that collects samples not only for the target concept alone, but also from ontologically neighboring concepts. Extensive experiments on three different datasets (NUS-WIDE, PASCAL VOC, and ImageNet) demonstrate the effectiveness of our proposed approach, which produces competitive performance even when compared with concept classifiers learned from expert-labeled training examples.

Keywords

Training set construction, visual concept learning, web images

Discipline

Computer Sciences | Graphics and Human Computer Interfaces

Research Areas

Intelligent Systems and Optimization

Publication

IEEE Transactions on Multimedia

Volume

14

Issue

4

First Page

1068

Last Page

1078

ISSN

1520-9210

Identifier

10.1109/TMM.2012.2190387

Publisher

Institute of Electrical and Electronics Engineers
