Image generation and constrained two-stage feature fusion for person re-identification

Authors: Tao Zhang, Xing Sun, Xuan Li, Zhengming Yi

Abstract

Generative adversarial networks (GANs) are widely used in person re-identification to expand training data by generating auxiliary images. However, it is widely believed that using too much generated data during training reduces the accuracy of re-identification models. In this study, an improved generator and a constrained two-stage fusion network are proposed. A novel gesture discriminator embedded in the generator evaluates the completeness of skeleton pose images. The improved generator makes the generated images more realistic, which benefits feature extraction. The constrained two-stage fusion network extracts and exploits the real information contained in the generated images for person re-identification. Unlike previous studies, this work also considers the fusion of shallow features. Specifically, the proposed network has two branches based on the ResNet50 architecture: one branch fuses the images generated by the generative adversarial network, while the other fuses the result of the first fusion with the original image. Experimental results show that the proposed method outperforms most existing comparable methods on Market-1501 and DukeMTMC-reID.
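The two-branch fusion described above can be sketched at a high level. This is a minimal illustration, not the authors' implementation: the `backbone` stand-in, the mean-pooling fusion operator, and the weighting parameter `alpha` are all assumptions, since the abstract does not specify these details.

```python
import numpy as np


def backbone(x):
    # Stand-in for a ResNet50 feature extractor (assumption): a fixed
    # random projection keeps the sketch self-contained and deterministic.
    rng = np.random.default_rng(0)
    w = rng.standard_normal((x.shape[-1], 128))
    return x @ w


def stage1_fuse(gen_feats):
    # Branch 1: fuse the features of GAN-generated images.
    # Mean pooling is an illustrative choice, not the paper's operator.
    return np.mean(gen_feats, axis=0)


def stage2_fuse(fused_gen, real_feat, alpha=0.7):
    # Branch 2: combine the stage-1 result with the original image's
    # features; alpha (hypothetical) constrains how much weight the
    # real image gets relative to the generated information.
    return alpha * real_feat + (1.0 - alpha) * fused_gen


# Toy inputs: three generated views and one original image, each
# represented as a 2048-dim vector.
gen_images = np.ones((3, 2048))
real_image = np.ones(2048)

gen_feats = np.stack([backbone(g) for g in gen_images])
fused = stage2_fuse(stage1_fuse(gen_feats), backbone(real_image))
print(fused.shape)  # (128,)
```

The point of the two stages is that generated images never mix directly with the original image: they are first condensed into one representation, which is then blended with the real features under an explicit constraint.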

Keywords: Person re-identification, Generative adversarial network, Shallow features fusion


DOI: https://doi.org/10.1007/s10489-021-02271-z