STURE: Spatial–Temporal Mutual Representation Learning for robust data association in online multi-object tracking

Authors:

Highlights:

Abstract

Online multi-object tracking (MOT) is a longstanding task in computer vision and for intelligent vehicle platforms. The dominant paradigm is tracking-by-detection, and its main difficulty is associating current candidate detections with historical tracklets. In MOT scenarios, however, each historical tracklet is an object sequence, whereas each candidate detection is a single image that lacks the temporal features of a sequence. This feature gap between current candidate detections and historical tracklets makes object association much harder. We therefore propose a Spatial–Temporal Mutual Representation Learning (STURE) approach that learns spatial–temporal representations of current candidate detections and historical sequences in a mutual representation space. Given historical tracklets, the detection learning network is forced to match the representations of the sequence learning network in this mutual space. The approach extracts more discriminative detection and sequence representations through several designed losses for object association. As a result, spatial–temporal features are learned mutually to reinforce the current detection features, and the feature gap is alleviated. To demonstrate the robustness of STURE, we apply it to the public MOT challenge benchmarks, where it compares favorably with various state-of-the-art online MOT trackers on identity-preserving metrics.
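The abstract describes forcing a detection (single-image) network to match the representations produced by a sequence (tracklet) network in a shared space. Below is a minimal PyTorch sketch of that idea; the encoder architectures, the shared frame-level backbone, the feature dimension, and the specific loss mix (alignment plus identity classification) are all assumptions for illustration and are not taken from the paper.

```python
# Hedged sketch of mutual representation learning between a detection
# encoder and a sequence encoder. Architectures and losses are assumed,
# not the paper's actual STURE implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DetectionEncoder(nn.Module):
    """Encodes a single detection crop into a feature vector (assumed small CNN)."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, img):                                   # img: (B, 3, H, W)
        return self.fc(self.backbone(img).flatten(1))         # (B, feat_dim)


class SequenceEncoder(nn.Module):
    """Encodes a historical tracklet (sequence of crops) into a temporal feature."""
    def __init__(self, det_encoder, feat_dim=256):
        super().__init__()
        self.det_encoder = det_encoder                         # shared frame encoder (assumption)
        self.temporal = nn.GRU(feat_dim, feat_dim, batch_first=True)

    def forward(self, seq):                                    # seq: (B, T, 3, H, W)
        b, t = seq.shape[:2]
        frame_feats = self.det_encoder(seq.flatten(0, 1)).view(b, t, -1)
        _, h = self.temporal(frame_feats)
        return h[-1]                                           # (B, feat_dim)


def mutual_representation_loss(det_feat, seq_feat, labels, classifier):
    """Pull single-detection features toward the temporal features of the same
    identity (mutual-space alignment) and keep both branches discriminative
    with an identity classification term. The exact combination is assumed."""
    align = F.mse_loss(det_feat, seq_feat.detach())
    id_loss = (F.cross_entropy(classifier(det_feat), labels)
               + F.cross_entropy(classifier(seq_feat), labels))
    return align + id_loss
```

At association time, one could then compare the reinforced detection features against tracklet features (e.g., by cosine similarity) to match current detections to historical tracklets; this usage is likewise an assumption consistent with the abstract, not a description of the paper's exact matching procedure.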

Keywords:

Review timeline: Received 26 July 2021, Revised 28 March 2022, Accepted 13 April 2022, Available online 21 April 2022, Version of Record 16 May 2022.

DOI: https://doi.org/10.1016/j.cviu.2022.103433