Discriminative feature extraction for video person re-identification via multi-task network

Authors: Wanru Song, Jieying Zheng, Yahong Wu, Changhong Chen, Feng Liu

Abstract

The goal of video-based person re-identification is to match image sequences of the same pedestrian across non-overlapping cameras. A critical issue in this task is how to exploit the useful information provided by videos. To solve this problem, we propose a novel feature learning framework for video-based person re-identification. The proposed method aims to capture the most significant information in the spatial and temporal domains and then build a discriminative and robust feature representation for each sequence. More specifically, to learn more effective frame-wise features, we introduce several pedestrian attributes into the video-based task and build a multi-task network for identity and attribute classification. In the training phase, we present a multi-loss function that minimizes intra-class variance and maximizes inter-class differences. A feature aggregation network is then employed to aggregate the frame-wise features and extract temporal information from the video. Furthermore, considering that adjacent frames typically have similar appearance features, we propose the concept of “non-redundant appearance feature extraction” to obtain sequence-level appearance descriptors of pedestrians. Since the temporal feature and the non-redundant appearance feature are complementary, we combine them in the distance learning phase by assigning each a different weighting coefficient. Extensive experiments on three video-based datasets demonstrate the effectiveness and superiority of our method.
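
The abstract describes a multi-task objective that combines identity classification, attribute classification, and a loss that minimizes intra-class variance (the keywords name center loss). Below is a minimal PyTorch sketch of one such objective; the loss weights `lambda_attr` and `lambda_center`, the feature dimension, and the class/attribute counts are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Pulls each frame-wise feature toward its identity's learned center,
    reducing intra-class variance."""
    def __init__(self, num_ids, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_ids, feat_dim))

    def forward(self, features, id_labels):
        # Mean squared Euclidean distance to the corresponding class centers.
        return ((features - self.centers[id_labels]) ** 2).sum(dim=1).mean()

class MultiTaskLoss(nn.Module):
    """Identity cross-entropy + per-attribute binary cross-entropy + center
    loss; this weighting scheme is an assumption for illustration."""
    def __init__(self, num_ids, feat_dim, lambda_attr=1.0, lambda_center=0.01):
        super().__init__()
        self.id_ce = nn.CrossEntropyLoss()
        self.attr_bce = nn.BCEWithLogitsLoss()
        self.center = CenterLoss(num_ids, feat_dim)
        self.lambda_attr = lambda_attr
        self.lambda_center = lambda_center

    def forward(self, features, id_logits, attr_logits, id_labels, attr_labels):
        return (self.id_ce(id_logits, id_labels)
                + self.lambda_attr * self.attr_bce(attr_logits, attr_labels.float())
                + self.lambda_center * self.center(features, id_labels))
```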
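
The distance-weighted combination of the temporal and non-redundant appearance cues can likewise be sketched in one line; `alpha` is a hypothetical trade-off coefficient (e.g., tuned on validation data), not a value reported in the paper.

```python
import numpy as np

def fused_distance(d_temporal: np.ndarray, d_appearance: np.ndarray,
                   alpha: float = 0.5) -> np.ndarray:
    """Weighted sum of two query-gallery distance matrices of equal shape."""
    return alpha * d_temporal + (1.0 - alpha) * d_appearance
```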

Keywords: Attribute, Center loss, Feature representation, Person re-identification, Video

DOI: https://doi.org/10.1007/s10489-020-01844-8