Cross-modality disentanglement and shared feedback learning for infrared-visible person re-identification
Authors:
Highlights:
• The pairing strategy of the cross-modality image disentanglement network (CMIDN) reduces the feature distribution distance between the infrared and visible sets.
• A dual-path shared module is proposed to mine middle-level discriminative information.
• The feedback scoring module provides a strong feedback signal to optimize the model parameters.
• The proposed CMIDN and DSFLN sub-networks achieve modality-level and feature-level alignment in an end-to-end manner.
Keywords: Cross-modality person re-identification, Generative adversarial network, Joint learning framework, Shared feedback
Article history: Received 2 March 2022; Revised 23 June 2022; Accepted 24 June 2022; Available online 28 June 2022; Version of Record 9 July 2022.
DOI: https://doi.org/10.1016/j.knosys.2022.109337