Person re-identification based on multi-scale feature learning
Abstract:
Extracting discriminative pedestrian features is an effective approach to person re-identification. Most person re-identification works focus on extracting abstract features from the high layers of the network but ignore middle-layer features, which reduces identification accuracy. To address this problem, we construct a Smooth Aggregation Module (SAM) that extracts, aligns, and fuses the feature maps from the middle layers of the network to compensate for the lack of detailed information in high-level features, and we propose an Omni-Scale Feature Aggregation method (OSFA) to jointly learn abstract features and local detail features. Since the intra-class distance in person re-identification should be smaller than the inter-class distance, we combine multiple losses to constrain the model. We evaluate our method on three standard benchmark datasets: Market-1501, CUHK03 (both detected and labeled), and DukeMTMC-reID. Experimental results show that our method outperforms state-of-the-art approaches.
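The abstract describes aligning and fusing middle-layer feature maps with high-layer abstract features. The paper's actual SAM/OSFA architecture is not specified here, so the following is only a minimal NumPy sketch of the general idea under assumed shapes: a spatially larger mid-layer map is downsampled (average pooling) to match the high-layer map, the two are concatenated along channels, and global average pooling yields a single descriptor. All shapes and the function names `avg_pool` and `fuse_features` are hypothetical.

```python
import numpy as np

def avg_pool(fmap, factor):
    # fmap: (C, H, W); downsample spatially by an integer factor via mean pooling.
    C, H, W = fmap.shape
    return fmap.reshape(C, H // factor, factor, W // factor, factor).mean(axis=(2, 4))

def fuse_features(mid, high):
    # Align the mid-layer map to the high-layer spatial size, concatenate
    # along the channel axis, then global-average-pool to a descriptor.
    factor = mid.shape[1] // high.shape[1]
    mid_aligned = avg_pool(mid, factor)                  # (C_mid, h, w)
    fused = np.concatenate([mid_aligned, high], axis=0)  # (C_mid + C_high, h, w)
    return fused.mean(axis=(1, 2))                       # (C_mid + C_high,)

rng = np.random.default_rng(0)
mid = rng.standard_normal((256, 32, 16))   # mid-layer: finer spatial detail (assumed shape)
high = rng.standard_normal((512, 8, 4))    # high-layer: abstract semantics (assumed shape)
descriptor = fuse_features(mid, high)
print(descriptor.shape)  # (768,)
```

In a real network the fusion would operate on learned convolutional feature maps and the alignment step could be a learned layer rather than fixed pooling; this sketch only illustrates the shape bookkeeping behind multi-scale aggregation.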
Keywords: Person re-identification, Multi-scale, Representation learning, Feature fusion
Article history: Received 12 March 2021, Revised 29 June 2021, Accepted 30 June 2021, Available online 8 July 2021, Version of Record 8 July 2021.
DOI: https://doi.org/10.1016/j.knosys.2021.107281