Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
6-2021
Abstract
Transferring makeup from a misaligned reference image is challenging. Previous methods overcome this barrier by computing pixel-wise correspondences between the two images, which is inaccurate and computationally expensive. In this paper, we take a different perspective and break down the makeup transfer problem into a two-step extraction-assignment process. To this end, we propose a Style-based Controllable GAN model that consists of three components, which perform target style-code encoding, face identity feature extraction, and makeup fusion, respectively. In particular, a Part-specific Style Encoder encodes the component-wise makeup style of the reference image into a style-code in an intermediate latent space W. The style-code discards spatial information and is therefore invariant to spatial misalignment. At the same time, the style-code embeds component-wise information, enabling flexible partial makeup editing from multiple references. This style-code, together with the source identity features, is integrated into a Makeup Fusion Decoder equipped with multiple AdaIN layers to generate the final result. Our proposed method demonstrates great flexibility in makeup transfer by supporting makeup removal, shade-controllable makeup transfer, and part-specific makeup transfer, even with large spatial misalignment. Extensive experiments demonstrate the superiority of our approach over state-of-the-art methods.
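To make the extraction-assignment idea in the abstract concrete, the sketch below shows a minimal PyTorch rendering of the three components: a part-specific style encoder that pools away spatial layout to produce a component-wise style-code, an AdaIN layer that assigns that code to source identity features, and a fusion decoder that renders the result. The module names beyond those in the abstract, the channel sizes, and the part list ("lip", "skin", "eye") are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of the extraction-assignment pipeline described in the abstract.
# Layer sizes, the part list, and the backbone depth are illustrative assumptions.
import torch
import torch.nn as nn


class AdaIN(nn.Module):
    """Adaptive Instance Normalization: scale/shift normalized content
    features with parameters predicted from the style-code."""

    def __init__(self, style_dim, num_channels):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_channels, affine=False)
        self.affine = nn.Linear(style_dim, num_channels * 2)

    def forward(self, content, style_code):
        gamma, beta = self.affine(style_code).chunk(2, dim=1)
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        return (1 + gamma) * self.norm(content) + beta


class PartStyleEncoder(nn.Module):
    """Encodes the reference image into a component-wise style-code.
    Global pooling discards spatial layout, so the code is invariant to
    misalignment between source and reference."""

    def __init__(self, style_dim=64, parts=("lip", "skin", "eye")):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One head per facial component, so codes can be mixed across references.
        self.heads = nn.ModuleDict({p: nn.Linear(64, style_dim) for p in parts})

    def forward(self, reference):
        feat = self.backbone(reference)
        return {p: head(feat) for p, head in self.heads.items()}


class MakeupFusionDecoder(nn.Module):
    """Injects the (possibly mixed) style-code into source identity features
    through AdaIN, then decodes to an image."""

    def __init__(self, style_dim=64 * 3, channels=64):
        super().__init__()
        self.adain = AdaIN(style_dim, channels)
        self.to_rgb = nn.Sequential(
            nn.Conv2d(channels, 32, 3, 1, 1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, 1, 1), nn.Tanh(),
        )

    def forward(self, identity_feat, style_code):
        return self.to_rgb(self.adain(identity_feat, style_code))


if __name__ == "__main__":
    src_feat = torch.randn(1, 64, 64, 64)    # identity features of the source face
    reference = torch.randn(1, 3, 256, 256)  # misaligned reference image

    codes = PartStyleEncoder()(reference)
    # Partial editing: a lip code from a second reference could be swapped in here.
    style = torch.cat([codes["lip"], codes["skin"], codes["eye"]], dim=1)
    out = MakeupFusionDecoder()(src_feat, style)
    print(out.shape)  # torch.Size([1, 3, 64, 64])
```

Because the style-code is a pooled vector rather than a spatial map, assigning makeup reduces to predicting AdaIN parameters, which is what enables shade control (interpolating codes) and part-specific transfer (mixing per-component codes) without pixel-wise correspondence.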
Keywords
Makeup transfer, Component-wise, Feature extraction, Reference image, Spatial information, Spatial misalignment, Spatially invariant, Style-code, Two-step extraction-assignment
Discipline
Databases and Information Systems
Research Areas
Information Systems and Management
Publication
Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, Online, June 19-25
First Page
6545
Last Page
6553
ISBN
9781665445092
Identifier
10.1109/CVPR46437.2021.00648
Publisher
IEEE
City or Country
New Jersey
Citation
DENG, Han; HAN, Chu; CAI, Hongmin; HAN, Guoqiang; and HE, Shengfeng.
Spatially-invariant style-codes controlled makeup transfer. (2021). Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, Online, June 19-25. 6545-6553.
Available at: https://ink.library.smu.edu.sg/sis_research/8526
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://doi.org/10.1109/CVPR46437.2021.00648