Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

6-2022

Abstract

We present a novel high-resolution face swapping method using the inherent prior knowledge of a pre-trained GAN model. Although previous research can leverage generative priors to produce high-resolution results, their quality can suffer from the entangled semantics of the latent space. We explicitly disentangle the latent semantics by utilizing the progressive nature of the generator, deriving structure attributes from the shallow layers and appearance attributes from the deeper ones. Identity and pose information within the structure attributes are further separated by introducing a landmark-driven structure transfer latent direction. The disentangled latent code produces rich generative features that incorporate feature blending to produce a plausible swapping result. We further extend our method to video face swapping by enforcing two spatio-temporal constraints on the latent space and the image space. Extensive experiments demonstrate that the proposed method outperforms state-of-the-art image/video face swapping methods in terms of hallucination quality and consistency. Code can be found at: https://github.com/cnnlstm/FSLSD_HiRes.
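
The sketch below is not the authors' implementation (see the linked repository for that); it only illustrates the latent-split idea stated in the abstract, assuming a StyleGAN2-style W+ code of shape (num_layers, 512). The split index, the landmark-driven direction tensor, and the function names are hypothetical placeholders.

import torch

NUM_LAYERS = 18     # W+ layers of a 1024x1024 StyleGAN2-style generator (assumption)
LATENT_DIM = 512
STRUCT_LAYERS = 7   # assumption: shallow layers carry structure attributes


def swap_latent(w_source: torch.Tensor,
                w_target: torch.Tensor,
                pose_direction: torch.Tensor,
                alpha: float = 1.0) -> torch.Tensor:
    """Combine source structure with target appearance in W+ space.

    w_source, w_target: (NUM_LAYERS, LATENT_DIM) W+ codes.
    pose_direction:     (STRUCT_LAYERS, LATENT_DIM) hypothetical landmark-driven
                        structure-transfer direction that moves the source
                        structure toward the target's pose.
    """
    w_swap = w_target.clone()
    # Shallow layers: take the source's structure (identity) and shift it
    # toward the target's pose along the transfer direction.
    w_swap[:STRUCT_LAYERS] = w_source[:STRUCT_LAYERS] + alpha * pose_direction
    # Deeper layers are left as the target's, preserving appearance attributes.
    return w_swap


if __name__ == "__main__":
    w_src = torch.randn(NUM_LAYERS, LATENT_DIM)
    w_tgt = torch.randn(NUM_LAYERS, LATENT_DIM)
    direction = torch.randn(STRUCT_LAYERS, LATENT_DIM) * 0.1
    print(swap_latent(w_src, w_tgt, direction).shape)  # torch.Size([18, 512])

In the paper, the resulting code additionally drives feature blending inside the generator and, for video, is regularized by the two spatio-temporal constraints; those steps are omitted here.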

Keywords

Image and video synthesis and generation, Face and gestures, Low-level vision, Computer vision, Codes, Face recognition, Semantics, Generators

Discipline

Artificial Intelligence and Robotics | Graphics and Human Computer Interfaces

Research Areas

Software and Cyber-Physical Systems

Publication

Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

First Page

7632

Last Page

7641

ISBN

9781665469470

Identifier

10.1109/CVPR52688.2022.00749

Publisher

IEEE Computer Society

City or Country

New York, NY, USA

Additional URL

https://doi.org/10.1109/CVPR52688.2022.00749
