Conference Proceeding Article
The parsing of building facades is a key component of 3D street scene reconstruction, a long-standing goal in computer vision. In this paper, we propose a deep learning based method for segmenting a facade into semantic categories. Man-made structures often exhibit symmetry. Based on this observation, we propose a symmetric regularizer for training the neural network, so that our method benefits both from the power of deep neural networks and from the structure of man-made architecture. We also propose a method to refine the segmentation results using bounding boxes generated by a Region Proposal Network. We evaluate our method by training an FCN-8s network with the novel loss function. Experimental results show that our method significantly outperforms previous state-of-the-art methods on both the ECP dataset and the eTRIMS dataset. To the best of our knowledge, we are the first to apply an end-to-end deep convolutional neural network at full image scale to the task of building facade parsing.
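The paper itself defines the exact form of the symmetric regularizer; purely as an illustration of the idea described above, one simple way to penalize left-right asymmetry in a per-pixel class-score map is to compare the map against its horizontal mirror. The function name and the mean-squared form below are assumptions for this sketch, not the authors' formulation.

```python
import numpy as np

def symmetry_regularizer(scores):
    """Illustrative (hypothetical) symmetry penalty for a score map.

    scores: array of shape (C, H, W) holding per-class pixel scores.
    Returns the mean squared difference between the map and its
    horizontal mirror; the value is zero when the prediction is
    perfectly left-right symmetric.
    """
    flipped = scores[:, :, ::-1]  # mirror along the width axis
    return float(np.mean((scores - flipped) ** 2))

# A left-right symmetric prediction incurs no penalty:
sym = np.ones((2, 4, 4))
assert symmetry_regularizer(sym) == 0.0
```

In training, such a term would be added to the usual per-pixel segmentation loss with a weighting coefficient, nudging the network toward the symmetric layouts typical of man-made facades.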
Artificial intelligence, Deep neural networks, Facades, Formal languages, Image segmentation, Neural networks, Semantics, Building facades, Convolutional neural network, Learning approach, Learning-based methods, Man-made structures, Segmentation results, Semantic category, State-of-the-art methods, Deep learning
Artificial Intelligence and Robotics | Computer Engineering
Proceedings of the 26th International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, 2017 August 19 - 25
LIU, Hantang; ZHANG, Jialiang; ZHU, Jianke; and HOI, Steven C. H.
DeepFacade: A deep learning approach to facade parsing. (2017). Proceedings of the 26th International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, 2017 August 19 - 25. Research Collection School Of Information Systems.
Available at: http://ink.library.smu.edu.sg/sis_research/3849
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.