Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
12-2017
Abstract
This paper proposes the novel Pose Guided Person Generation Network (PG^2), which synthesizes person images in arbitrary poses based on an image of that person and a novel pose. Our generation framework PG^2 utilizes pose information explicitly and consists of two key stages: pose integration and image refinement. In the first stage, the condition image and the target pose are fed into a U-Net-like network to generate an initial but coarse image of the person in the target pose. The second stage then refines this initial, blurry result by training a U-Net-like generator in an adversarial way. Extensive experimental results on both 128x64 re-identification images and 256x256 fashion photos show that our model generates high-quality person images with convincing details.
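The coarse-to-fine pipeline described above can be sketched as follows. This is a minimal toy illustration, not the paper's actual networks: the function names, the mask-based fusion in stage one, and the residual operation in stage two are placeholder assumptions standing in for the two U-Net-like generators.

```python
import numpy as np

def stage1_coarse(cond_img, pose_maps):
    # Placeholder for the first-stage U-Net-like generator:
    # it fuses the condition image with the target-pose heatmaps
    # to produce an initial, coarse image in the target pose.
    pose_mask = pose_maps.max(axis=0, keepdims=True)  # (1, H, W)
    return cond_img * pose_mask  # coarse, blurry estimate

def stage2_refine(coarse_img, cond_img):
    # Placeholder for the second-stage generator, trained
    # adversarially in the paper; here a toy residual is added
    # to the coarse result to mimic refinement.
    residual = 0.1 * (cond_img - coarse_img)
    return np.clip(coarse_img + residual, 0.0, 1.0)

# Toy inputs: a 3-channel 128x64 condition image (the
# re-identification resolution used in the paper) and 18
# target-pose heatmaps (an assumed keypoint count).
rng = np.random.default_rng(0)
cond = rng.random((3, 128, 64))
pose = rng.random((18, 128, 64))

coarse = stage1_coarse(cond, pose)
refined = stage2_refine(coarse, cond)
print(refined.shape)  # (3, 128, 64)
```

The design point the sketch mirrors is the division of labor: stage one only has to get the global pose structure right, while stage two concentrates on appearance detail by correcting the coarse output.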
Keywords
Person image generation, pose estimation, generative adversarial networks
Discipline
Artificial Intelligence and Robotics | Computer Sciences | Numerical Analysis and Scientific Computing
Research Areas
Data Science and Engineering
Publication
Advances in Neural Information Processing Systems 30 (NIPS 2017)
First Page
406
Last Page
416
City or Country
Long Beach, CA
Citation
MA, Liqian; JIA, Xu; SUN, Qianru; SCHIELE, Bernt; TUYTELAARS, Tinne; and VAN GOOL, Luc.
Pose guided person image generation. (2017). Advances in Neural Information Processing Systems 30 (NIPS 2017). 406-416.
Available at: https://ink.library.smu.edu.sg/sis_research/4458
Copyright Owner and License
Authors
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
http://papers.nips.cc/paper/6644-pose-guided-person-image-generation
Included in
Artificial Intelligence and Robotics Commons, Numerical Analysis and Scientific Computing Commons