3D-GAT: 3D-guided adversarial transform network for person re-identification in unseen domains
Authors:
Abstract
Person re-identification (ReID) has seen remarkable improvements in recent years. However, its application in real-world scenarios is limited by the disparity among different cameras and datasets. In general, it remains challenging to generalize ReID algorithms from one domain to another, especially when the target domain is unknown. To address this issue, we develop a 3D-guided adversarial transform (3D-GAT) network that exploits the transferability of source training data to facilitate learning domain-independent knowledge. Guided by a 3D body model and human poses, 3D-GAT uses image-to-image translation to synthesize person images under different conditions while preserving identity-discriminative features as much as possible. With these augmented training data, ReID approaches can more easily learn how a person's appearance changes across viewpoints and poses, most of which are not covered by the original training data, and thus achieve higher accuracy, especially in an unknown domain. Extensive experiments on Market-1501, DukeMTMC-reID and CUHK03 demonstrate the effectiveness of the proposed approach, which is competitive with baseline models on the source dataset and sets a new state of the art in direct transfer to other datasets.
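To make the augmentation idea in the abstract concrete, below is a minimal PyTorch sketch of the general scheme it describes: a pose-conditioned image-to-image generator synthesizes additional views of each training identity, and the enlarged batch is then used to train a ReID model. This is not the authors' released code; the module names, network architecture, and tensor shapes are illustrative assumptions only.

```python
# Hypothetical sketch of 3D-guided training-data augmentation for ReID.
# Assumption: target pose maps have already been rendered from a 3D body
# model; here they are random tensors standing in for real pose renderings.

import torch
import torch.nn as nn


class PoseConditionedGenerator(nn.Module):
    """Toy stand-in for a pose-conditioned image-to-image translator.

    Takes a source person image and a target pose map and outputs the same
    person re-rendered under the target pose/viewpoint.
    """

    def __init__(self):
        super().__init__()
        # Source image (3 ch) and pose map (3 ch) are concatenated on channels.
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, img, pose_map):
        return self.net(torch.cat([img, pose_map], dim=1))


def augment_batch(generator, images, pose_maps, labels):
    """Synthesize one extra view per real image; labels are duplicated
    because the synthesized image keeps the source identity."""
    with torch.no_grad():
        fake = generator(images, pose_maps)
    return torch.cat([images, fake], dim=0), torch.cat([labels, labels], dim=0)


if __name__ == "__main__":
    gen = PoseConditionedGenerator().eval()
    imgs = torch.randn(4, 3, 256, 128)    # real training crops
    poses = torch.randn(4, 3, 256, 128)   # stand-in target pose maps
    ids = torch.tensor([0, 1, 2, 3])      # identity labels
    batch, batch_ids = augment_batch(gen, imgs, poses, ids)
    print(batch.shape, batch_ids.shape)   # (8, 3, 256, 128) and (8,)
```

The key design point the abstract implies is that identity labels transfer unchanged to the synthesized images, so any standard ReID loss (e.g. identity classification or triplet loss) can be applied to the augmented batch without modification.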
Keywords: Person re-identification, Domain transfer, 3D models, Training with synthesized image data
Article history: Received 12 October 2019, Revised 11 October 2020, Accepted 14 December 2020, Available online 25 December 2020, Version of Record 30 December 2020.
DOI: https://doi.org/10.1016/j.patcog.2020.107799