TSEV-GAN: Generative Adversarial Networks with Target-aware Style Encoding and Verification for facial makeup transfer

Authors:

Highlights:

Abstract:

Generative Adversarial Networks (GANs) have brought great progress in image-to-image translation. The problem we focus on is how to accurately extract and transfer the makeup style from a reference facial image to a target face. We propose a GAN-based generative model with Target-aware makeup Style Encoding and Verification, referred to as TSEV-GAN. This design is motivated by two insights: (a) When directly encoding the reference image, the encoder may focus on regions that are not necessarily important or desirable. To precisely capture the style, we encode the difference map between the reference image and its corresponding de-makeup image, and then inject the obtained style code into a generator. (b) A generic real-fake discriminator cannot ensure the correctness of the rendered makeup pattern. In view of this, we impose style representation learning on a conditional discriminator. By identifying style consistency between the reference and synthesized images, the generator is induced to precisely replicate the desired makeup. We perform extensive experiments on existing makeup benchmarks to verify the effectiveness of our improvement strategies in transferring a variety of makeup styles. Moreover, the proposed model outperforms existing state-of-the-art makeup transfer methods in terms of makeup similarity and preservation of irrelevant content.
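To make the two abstract ideas concrete, the following is a minimal illustrative sketch (not the authors' code); all module names, layer sizes, and the choice of cosine similarity for the verification loss are assumptions, shown only to convey how a difference-map style encoder and a style-consistency check might look in PyTorch.

```python
# Illustrative sketch only -- not the TSEV-GAN implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiffStyleEncoder(nn.Module):
    """Encodes the difference map between a reference image and its de-makeup
    counterpart, so the style code reflects the applied makeup rather than
    makeup-irrelevant regions of the reference."""
    def __init__(self, style_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, style_dim)

    def forward(self, reference, de_makeup):
        diff = reference - de_makeup            # difference map isolates the makeup pattern
        h = self.net(diff).flatten(1)
        return self.fc(h)                       # style code to be injected into the generator

def style_verification_loss(style_ref, style_fake):
    """Hypothetical style-consistency term: pushes the style representation of the
    synthesized image (as seen by a conditional discriminator) toward that of the
    reference, so the generator is penalized for rendering the wrong makeup."""
    cos = F.cosine_similarity(style_ref, style_fake, dim=1)
    return (1.0 - cos).mean()
```

In this sketch, the generator would be trained with the usual adversarial objective plus `style_verification_loss`, which is the role the abstract attributes to style verification on the conditional discriminator.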

Keywords: Generative Adversarial Networks, Makeup transfer, Style verification, Image translation

Article history: Received 1 May 2022, Revised 27 September 2022, Accepted 27 September 2022, Available online 4 October 2022, Version of Record 13 October 2022.

DOI: https://doi.org/10.1016/j.knosys.2022.109958