Deep generative image priors for semantic face manipulation

Xianxu Hou, Linlin Shen, Zhong Ming, Guoping Qiu

Research output: Journal Publication › Article › peer-review

2 Citations (Scopus)


Previous work on generative adversarial networks (GANs) has mainly focused on how to synthesize high-fidelity images. In this paper, we present a framework that leverages the knowledge learned by GANs for semantic face manipulation. In particular, we propose to control the semantics of synthesized faces by adapting their latent codes with an attribute prediction model. Moreover, to achieve a more accurate estimation of different facial attributes, we pretrain the attribute prediction model by inverting synthesized face images back into the GAN latent space. As a result, our method explicitly considers the semantics encoded in the latent space of a pretrained GAN and can faithfully edit various attributes such as eyeglasses, smiling, bald, age, mustache, and gender in high-resolution face images. Extensive experiments show that our method outperforms the state of the art in both face attribute prediction and semantic face manipulation.
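The core idea of the abstract, steering a GAN latent code so that an attribute predictor's output moves toward a desired value, can be illustrated with a minimal sketch. All names, dimensions, and the linear scorer below are illustrative assumptions, not the authors' actual models:

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 512  # assumed latent-code size, purely illustrative

# Stand-in for a pretrained attribute predictor on latent codes:
# f(w) = sigmoid(a . w + b). Scaling a keeps the sigmoid unsaturated.
a = rng.normal(size=latent_dim) / np.sqrt(latent_dim)
b = 0.0

def attribute_score(w):
    """Predicted probability that the attribute (e.g. 'smiling') is present."""
    return 1.0 / (1.0 + np.exp(-(a @ w + b)))

def edit_latent(w, target=1.0, lr=0.5, steps=100):
    """Adapt latent code w so the predicted attribute moves toward `target`."""
    w = w.copy()
    for _ in range(steps):
        s = attribute_score(w)
        # Gradient of the squared error (s - target)^2 w.r.t. w for the
        # sigmoid-of-linear scorer defined above.
        grad = 2.0 * (s - target) * s * (1.0 - s) * a
        w -= lr * grad
    return w

w0 = rng.normal(size=latent_dim)   # original latent code
w1 = edit_latent(w0, target=1.0)   # edited code with the attribute strengthened
print(attribute_score(w0), attribute_score(w1))
```

In the paper's setting the edited code `w1` would then be fed back through the pretrained GAN generator to render the manipulated face; here the generator is omitted and only the latent-space adaptation step is sketched.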

Original language: English
Article number: 109477
Journal: Pattern Recognition
Publication status: Published - Jul 2023
Externally published: Yes


Keywords

  • Face attribute prediction
  • GANs
  • Semantic face manipulation

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence
