Segmentation mask-guided person image generation
Authors: Meichen Liu, Xin Yan, Chenhui Wang, Kejun Wang
Abstract
Background clutter and pose variation are key factors that prevent a network from learning a robust person re-identification (Re-ID) model. To address this problem, we first introduce a binary segmentation mask to construct the body region, which serves as the input to the generator, and then design a segmentation mask-guided person image generation network for pose transfer. The binary segmentation mask removes background clutter at the pixel level and preserves more edge detail, so the generated image achieves better shape consistency with the input image. Compared with previous methods, the proposed method dramatically improves the model's adaptive ability and copes with diverse postures. In addition, we design a lightweight attention mechanism module as a guider module, which helps the generator focus on the discriminative features of pedestrians. Experimental results demonstrate the effectiveness of the proposed method and its superior performance over most state-of-the-art methods, without excessive computation in the design of the Re-ID model. It is worth mentioning that our ideas can be readily combined with other fields to alleviate the problem of insufficient pose variation in existing datasets.
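The abstract does not give implementation details, but the two ideas it names, masking out the background before feeding the generator and a lightweight attention "guider", can be illustrated with a minimal PyTorch sketch. This is only an assumption-laden illustration: the guider is approximated here by a squeeze-and-excitation-style channel attention block, the pose is assumed to be 18-channel keypoint heatmaps, and all names (LightweightAttentionGuider, masked_generator_input) are hypothetical, not the authors' code.

```python
import torch
import torch.nn as nn

class LightweightAttentionGuider(nn.Module):
    """SE-style channel attention standing in for the paper's lightweight
    guider module (an assumption, not the authors' exact design)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # re-weight channels toward discriminative pedestrian features


def masked_generator_input(image, mask, target_pose):
    """Remove background clutter at the pixel level with a binary mask and
    concatenate the body region with the target pose map as generator input."""
    body_region = image * mask  # keep only foreground (person) pixels
    return torch.cat([body_region, target_pose], dim=1)


# Toy usage with assumed shapes: 3-channel image, 1-channel binary mask,
# 18-channel pose heatmaps, 128x64 person crops, batch size 2.
image = torch.rand(2, 3, 128, 64)
mask = (torch.rand(2, 1, 128, 64) > 0.5).float()
pose = torch.rand(2, 18, 128, 64)
gen_input = masked_generator_input(image, mask, pose)   # (2, 21, 128, 64)
features = torch.rand(2, 64, 32, 16)                    # intermediate feature map
guided = LightweightAttentionGuider(64)(features)       # (2, 64, 32, 16)
```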
Keywords: Pose transferrable, Segmentation mask, Generative adversarial networks, Person re-identification
Paper URL: https://doi.org/10.1007/s10489-020-01907-w