Weakly-supervised image segmentation is a challenging problem with multidisciplinary applications in multimedia content analysis and beyond. It aims to segment an image by leveraging its image-level semantics (i.e., tags). This paper presents a weakly-supervised image segmentation algorithm that learns the distribution of spatially structural superpixel sets from image-level labels. More specifically, we first extract graphlets from a given image; these are small-sized graphs consisting of superpixels that encapsulate their spatial structure. Then, an efficient manifold embedding algorithm is proposed to transfer labels from training images onto graphlets. It is further observed that many graphlets are redundant and not discriminative of semantic categories; these are abandoned by a graphlet selection scheme, as they make no contribution to the subsequent segmentation. Thereafter, we use a Gaussian mixture model (GMM) to learn the distribution of the selected post-embedding graphlets (i.e., the vectors output from the graphlet embedding). Finally, we propose an image segmentation algorithm, termed representative graphlet cut, which leverages the learned GMM prior to measure the structural homogeneity of a test image. Experimental results on five popular segmentation data sets show that the proposed approach outperforms state-of-the-art weakly-supervised image segmentation methods. Moreover, our approach performs competitively with fully-supervised segmentation models.
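To make the graphlet-extraction step concrete, the sketch below enumerates small connected superpixel sets from a superpixel adjacency graph. This is a simplified illustration only: the paper's graphlets also carry spatial-structure and appearance information, and the adjacency-dict representation and `max_size` parameter are assumptions of this sketch, not details from the paper.

```python
def extract_graphlets(adjacency, max_size):
    """Enumerate connected superpixel sets ("graphlets") of up to
    max_size superpixels from a superpixel adjacency graph.

    adjacency: dict mapping a superpixel id to the set of ids of its
    spatially adjacent superpixels (an assumed representation).
    Returns a set of frozensets, each a connected superpixel set.
    """
    # Every single superpixel is a size-1 graphlet.
    graphlets = {frozenset([v]) for v in adjacency}
    frontier = set(graphlets)
    for _ in range(max_size - 1):
        grown = set()
        for g in frontier:
            # Grow each graphlet by one adjacent superpixel,
            # which keeps every generated set connected.
            neighbours = set().union(*(adjacency[v] for v in g)) - g
            for n in neighbours:
                grown.add(g | {n})
        graphlets |= grown
        frontier = grown
    return graphlets
```

For a three-superpixel chain 1-2-3, `extract_graphlets({1: {2}, 2: {1, 3}, 3: {2}}, 3)` yields the six connected sets {1}, {2}, {3}, {1,2}, {2,3}, {1,2,3}, while the disconnected pair {1,3} is never produced.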
Keywords: Structure cues, active learning, graphlet, segmentation, weakly supervised
Disciplines: Databases and Information Systems; Data Management and Analytics
Publication: IEEE Transactions on Multimedia
ZHANG, Luming; GAO, Yue; XIA, Yingjie; LU, Ke; SHEN, Jialie; and JI, Rongrong.
Representative discovery of structure cues for weakly-supervised image segmentation. (2014). IEEE Transactions on Multimedia, 16(2), 470-479. Research Collection School Of Information Systems.
Available at: http://ink.library.smu.edu.sg/sis_research/1956
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.