Conference Proceeding Article
One of the key issues in content-based image retrieval is the measurement of similarity between images. Images are represented as points in a space of low-level visual features, and most similarity measures are based on some distance measure between these features. Given a distance metric, two images separated by a shorter distance are deemed more similar than images that are far apart. The well-known problem with such similarity measures is the semantic gap: two images separated by a large distance may still share the same semantic content. In this paper, we propose a novel similarity measure for images that goes beyond distance measurement. The key idea is to exploit the clustering structure that emerges when a large number of images are present. The similarity of two images is determined not only by their Euclidean distance in the space of visual features but also by the likelihood that they are clustered together, which is further estimated using a marginalized kernel. Our empirical studies on COREL datasets show that the proposed similarity measure is effective for traditional content-based image retrieval as well as for user relevance feedback.
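The abstract's idea of combining a distance-based kernel with the likelihood of two images sharing a cluster can be sketched as follows. This is a minimal illustration, not the paper's actual method: the function names, the softmax-based soft assignments (a stand-in for responsibilities from a fitted mixture model), and the parameters `gamma` and `beta` are all assumptions made for the example.

```python
import numpy as np

def soft_assignments(X, centers, beta=1.0):
    """Soft cluster responsibilities p(k|x) for each row of X.

    Illustrative stand-in: a softmax over negative squared distances
    to the cluster centers, rather than a fitted mixture model.
    """
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    logits = -beta * d2
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def similarity(x, y, centers, gamma=1.0, beta=1.0):
    """Combined similarity: distance kernel times a marginalized
    kernel over latent cluster assignments, K(x,y) = sum_k p(k|x) p(k|y)."""
    k_dist = np.exp(-gamma * ((x - y) ** 2).sum())
    P = soft_assignments(np.stack([x, y]), centers, beta)
    k_cluster = float(P[0] @ P[1])
    return k_dist * k_cluster
```

Under this sketch, two images that are moderately far apart in feature space but fall in the same cluster retain a higher similarity than the distance kernel alone would give them, which is the intuition behind going "beyond distance measurement".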
Computer Sciences | Databases and Information Systems
Data Management and Analytics
Large-Scale Semantic Access to Content (Text, Image, Video and Sound): Proceedings of RIAO 8th Conference 2007, May 30 - June 1, Pittsburgh, PA
Le Centre De Hautes Etudes Internationales D'informatique Documentaire
KANG, Feng; JIN, Rong; and HOI, Steven C. H.
Similarity Beyond Distance Measurement. (2007). Large-Scale Semantic Access to Content (Text, Image, Video and Sound): Proceedings of RIAO 8th Conference 2007, May 30 - June 1, Pittsburgh, PA. 449-460. Research Collection School Of Information Systems.
Available at: http://ink.library.smu.edu.sg/sis_research/2387
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.