Appearance and shape based image synthesis by conditional variational generative adversarial network

Authors:

Highlights:

Abstract:

Person image synthesis based on shape and appearance using deep generative models opens the door to numerous applications, such as person re-identification (ReID) and the movie industry. Existing image synthesis methods produce the image of an object directly, and therefore fail to recover spatial deformations when images are generated. In this paper, we present a conditional variational generative adversarial network (CVGAN) that synthesizes desired images guided by a target shape, modeling the inherent interplay between shape and appearance. First, the shape and appearance of the given images are disentangled through variational inference, which enables us to generate person images with arbitrary shapes. Second, to preserve details and generate photo-realistic images, a Kullback–Leibler (KL) loss is adopted to reduce the gap between the condition image and the generated image. Third, to mitigate the vanishing-gradient problem and train our framework stably, we propose a combined general learning method in which the discriminative network uses a least-squares loss. We experiment on the COCO, DeepFashion and Market-1501 datasets, and the results demonstrate that CVGAN significantly improves the discriminability, diversity and quality of synthesized images over existing methods.
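As a rough illustration only (not the authors' implementation, whose details are not given in the abstract), the two loss components mentioned above — a KL term on a Gaussian latent posterior, as is standard in variational inference, and a least-squares objective for the discriminator, as in LSGAN — can be sketched as:

```python
import numpy as np

def kl_gaussian(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims.

    Standard closed-form KL term used in variational inference; the paper's
    exact KL formulation between condition and generated images may differ.
    """
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def lsgan_d_loss(d_real, d_fake):
    """Least-squares discriminator loss: push real scores to 1, fake to 0."""
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake**2)

def lsgan_g_loss(d_fake):
    """Least-squares generator loss: push fake scores toward 1."""
    return 0.5 * np.mean((d_fake - 1.0) ** 2)
```

Replacing the usual sigmoid cross-entropy with a least-squares objective penalizes samples by their distance from the decision boundary, which keeps gradients informative even for confidently classified samples and is a common remedy for vanishing discriminator gradients.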

Keywords: Image synthesis, Deep generative models, Variational inference, Generative adversarial network

Article history: Received 23 June 2019, Revised 8 November 2019, Accepted 26 December 2019, Available online 31 December 2019, Version of Record 7 March 2020.

DOI: https://doi.org/10.1016/j.knosys.2019.105450