Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

12-2017

Abstract

This paper proposes the novel Pose Guided Person Generation Network (PG^2), which synthesizes person images in arbitrary poses, based on an image of that person and a novel pose. Our generation framework PG^2 utilizes the pose information explicitly and consists of two key stages: pose integration and image refinement. In the first stage, the condition image and the target pose are fed into a U-Net-like network to generate an initial but coarse image of the person in the target pose. The second stage then refines this initial, blurry result by training a U-Net-like generator in an adversarial way. Extensive experimental results on both 128x64 re-identification images and 256x256 fashion photos show that our model generates high-quality person images with convincing details.
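The two-stage data flow described in the abstract can be sketched as follows. This is an illustrative shape-level sketch only, not the paper's implementation: the `g1` and `g2` stubs stand in for the U-Net-like generators, and the 18-channel pose input assumes one heatmap per body keypoint, a common convention the paper's actual encoding may differ from.

```python
import numpy as np

def g1(cond, pose):
    """Stage-1 stub (coarse generator): condition image + target pose -> coarse image.
    In the paper this is a U-Net-like network; here it is a placeholder mapping."""
    x = np.concatenate([cond, pose], axis=-1)  # channel-wise conditioning
    return np.tanh(x[..., :3])                 # coarse RGB output in [-1, 1]

def g2(cond, coarse):
    """Stage-2 stub (refinement generator, trained adversarially in the paper):
    predicts a residual that sharpens the blurry stage-1 output."""
    x = np.concatenate([cond, coarse], axis=-1)
    diff = 0.1 * np.tanh(x[..., :3])           # small residual correction
    return np.clip(coarse + diff, -1.0, 1.0)   # refined image, still in [-1, 1]

# Example at the 128x64 re-identification resolution mentioned in the abstract.
cond = np.random.uniform(-1, 1, (128, 64, 3))    # condition image of the person
pose = np.random.uniform(0, 1, (128, 64, 18))    # assumed: 18 keypoint heatmaps
coarse = g1(cond, pose)                          # stage 1: pose integration
refined = g2(cond, coarse)                       # stage 2: image refinement
```

The key design point the sketch reflects is that stage 2 operates on the stage-1 output conditioned on the original image, so the adversarial training only needs to model the residual detail, not the full image.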

Keywords

Person image generation, pose estimation, generative adversarial networks

Discipline

Artificial Intelligence and Robotics | Computer Sciences | Numerical Analysis and Scientific Computing

Research Areas

Data Science and Engineering

Publication

Advances in Neural Information Processing Systems 30 (NIPS 2017)

First Page

406

Last Page

416

City or Country

Long Beach, CA

Copyright Owner and License

Authors

Additional URL

http://papers.nips.cc/paper/6644-pose-guided-person-image-generation
