Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
11-2007
Abstract
We present a simple and practical approach for segmenting un-occluded items in a scene by actively casting shadows. By ‘items’, we refer to objects (or parts of objects) enclosed by depth edges. Our approach exploits the fact that under varying illumination, un-occluded items will cast shadows on occluded items or on the background, but will not be shadowed themselves. We employ an active illumination approach by taking multiple images under different illumination directions, with the illumination source close to the camera. Our approach ignores the texture edges in the scene and uses only the shadow and silhouette information to determine occlusions. We show that such a segmentation does not require the estimation of a depth map or 3D information, which can be cumbersome and expensive, and which often fails due to a lack of texture or the presence of specular objects in the scene. Our approach can handle complex scenes with self-shadows and specularities. Results on several real scenes, along with an analysis of failure cases, are presented.
Keywords
Closed Contour, Shadow Region, Ratio Image, Complex Scene, Cast Shadow
Discipline
Computer Engineering
Research Areas
Data Science and Engineering
Publication
Proceedings of the 8th Asian Conference on Computer Vision, Tokyo, Japan, 2007 November 18 - 22
Volume
4843
First Page
945
Last Page
955
ISBN
9783540763857
Identifier
10.1007/978-3-540-76386-4_90
Publisher
Springer, Berlin, Heidelberg
City or Country
Tokyo, Japan
Citation
KOH, Tze K; AGRAWAL, Amit; RASKAR, Ramesh; MORGAN, Steve; MILES, Nicholas; and HAYES-GILL, Barrie.
Detecting and segmenting un-occluded items by actively casting shadows. (2007). Proceedings of the 8th Asian Conference on Computer Vision, Tokyo, Japan, 2007 November 18 - 22. 4843, 945-955.
Available at: https://ink.library.smu.edu.sg/cis_research/26
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.
Additional URL
https://doi.org/10.1007/978-3-540-76386-4_90