Unpaired image to image transformation via informative coupled generative adversarial networks

Authors: Hongwei Ge, Yuxuan Han, Wenjing Kang, Liang Sun

Abstract

We consider image transformation problems, where the objective is to translate images from a source domain to a target one. The problem is challenging because it is difficult to preserve the key properties of the source images while making the details of the target images as distinguishable as possible. To solve this problem, we propose an informative coupled generative adversarial network (ICo-GAN). For each domain, an adversarial generator-and-discriminator network is constructed. We make an approximately-shared latent space assumption via a mutual information mechanism, which enables the algorithm to learn representations of both domains in an unsupervised setting and to transfer the key properties of images from source to target. Moreover, to further enhance performance, a weight-sharing constraint between the two subnetworks is combined with perceptual losses extracted at different levels from the intermediate layers of the networks. With quantitative and visual results on the tasks of edge-to-photo transformation, face attribute transfer, and image inpainting, we demonstrate ICo-GAN's effectiveness compared with other state-of-the-art algorithms.

Keywords: generative adversarial networks, image transformation, mutual information, perceptual loss

DOI: https://doi.org/10.1007/s11704-020-9002-7