Beyond modality alignment: Learning part-level representation for visible-infrared person re-identification

Authors:

Highlights:

Abstract

Visible-infrared person re-identification (VI-reID) aims to automatically retrieve a pedestrian of interest captured by sensors of different modalities, such as a visible camera versus an infrared sensor. The key challenge is to learn representations that are both modality-invariant and discriminative. Unfortunately, existing VI-reID work mainly focuses on tackling the modality difference, while fine-grained discriminative information has not been well investigated, which leads to inferior identification performance. To address this problem, we propose a Dual-Alignment Part-aware Representation (DAPR) framework that simultaneously alleviates the modality bias and mines discriminative representations at different levels. In particular, DAPR hierarchically reduces the modality discrepancy of high-level features by back-propagating reversed gradients from a modality classifier, in order to learn a modality-invariant feature space. Meanwhile, multiple classifier heads with an improved part-aware BNNeck are integrated to supervise the network in producing identity-discriminative representations with respect to both local details and global structures in the learned modality-invariant space. Trained in an end-to-end manner, the proposed DAPR produces camera- and modality-invariant yet discriminative features for person matching across modalities. Extensive experiments on two benchmarks, SYSU-MM01 and RegDB, demonstrate the effectiveness of the proposed method.
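To illustrate the modality-alignment component described in the abstract, below is a minimal PyTorch sketch of adversarial alignment through a gradient reversal layer attached to a modality classifier. The names (`GradReverse`, `ModalityDiscriminator`), feature dimensions, and the `lambd` weighting are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates and scales the gradient in the
    backward pass, so the backbone is trained to confuse the modality classifier."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class ModalityDiscriminator(nn.Module):
    """Binary classifier predicting visible vs. infrared from pooled features
    (hypothetical layer sizes)."""
    def __init__(self, in_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 2),
        )

    def forward(self, feat, lambd=1.0):
        # Gradient reversal sits between the backbone features and the classifier.
        return self.net(GradReverse.apply(feat, lambd))

# Usage sketch: minimizing the discriminator's cross-entropy loss on modality
# labels pushes reversed gradients into the backbone, driving it toward a
# modality-invariant feature space.
feats = torch.randn(8, 2048)          # pooled high-level backbone features (assumed dim)
modality = torch.randint(0, 2, (8,))  # 0 = visible, 1 = infrared
disc = ModalityDiscriminator(2048)
loss = nn.CrossEntropyLoss()(disc(feats, lambd=0.5), modality)
loss.backward()
```

In a hierarchical setup as the abstract suggests, one such discriminator could be attached at each of several feature stages, with its loss added to the identity losses from the part-aware classifier heads.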

Keywords: Visible-infrared person re-identification, Modality alignment, Part-aware feature learning, Hierarchical modality discriminator

Article history: Received 8 January 2021, Accepted 28 January 2021, Available online 4 February 2021, Version of Record 16 February 2021.

DOI: https://doi.org/10.1016/j.imavis.2021.104118