Abstract
In visual object tracking, existing trackers struggle to handle appearance deformation, occlusion, and interference from similar objects. To address these problems, this article proposes a new Anchor-free Tracker based on a Space-time Memory Network (ATSMN). In this work, we innovatively combine a space-time memory network, a memory feature fusion network, and a transformer feature cross-fusion network. Through the synergy of these components, the tracker can make full use of the temporal context information in memory frames related to the object and better adapt to changes in the object's appearance, which yields accurate classification and regression results. Extensive experimental results on challenging benchmarks show that ATSMN achieves state-of-the-art tracking performance compared with other advanced trackers.
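For illustration only, the sketch below shows one minimal, hypothetical PyTorch rendering of the two ideas named in the abstract: a space-time memory read that attends from current-frame features to features stored from past memory frames, followed by an anchor-free per-pixel classification and box-regression head. The module names, channel sizes, and fusion by concatenation are assumptions made for this example; they are not the authors' implementation.

```python
import torch
import torch.nn as nn

class MemoryRead(nn.Module):
    """Hypothetical space-time memory read: attend from the query (current)
    frame features to features collected from T memory frames."""
    def __init__(self, channels):
        super().__init__()
        self.key = nn.Conv2d(channels, channels // 2, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.query = nn.Conv2d(channels, channels // 2, 1)

    def forward(self, query_feat, memory_feats):
        # query_feat: (B, C, H, W); memory_feats: (B, T, C, H, W)
        b, t, c, h, w = memory_feats.shape
        mem = memory_feats.view(b * t, c, h, w)
        k = self.key(mem).view(b, t, -1, h * w).permute(0, 2, 1, 3).reshape(b, -1, t * h * w)
        v = self.value(mem).view(b, t, c, h * w).permute(0, 2, 1, 3).reshape(b, c, t * h * w)
        q = self.query(query_feat).view(b, -1, h * w)                  # (B, C/2, HW)
        # attention weights over all memory positions (T*HW) for each query position
        attn = torch.softmax(torch.einsum('bck,bcn->bkn', q, k) / (k.shape[1] ** 0.5), dim=-1)
        read = torch.einsum('bkn,bcn->bck', attn, v).view(b, c, h, w)  # (B, C, H, W)
        return torch.cat([query_feat, read], dim=1)                    # fuse query + memory read

class AnchorFreeHead(nn.Module):
    """Hypothetical anchor-free head: per-pixel objectness score and
    box offsets (l, t, r, b), in the spirit of FCOS-style trackers."""
    def __init__(self, channels):
        super().__init__()
        self.cls = nn.Conv2d(channels, 1, 3, padding=1)
        self.reg = nn.Conv2d(channels, 4, 3, padding=1)

    def forward(self, x):
        return self.cls(x), self.reg(x).relu()

# Toy usage with random tensors standing in for backbone features.
feat_c = 64
read = MemoryRead(feat_c)
head = AnchorFreeHead(feat_c * 2)
query = torch.randn(1, feat_c, 16, 16)       # current-frame feature map
memory = torch.randn(1, 3, feat_c, 16, 16)   # features from 3 memory frames
cls_map, box_map = head(read(query, memory))
print(cls_map.shape, box_map.shape)          # (1, 1, 16, 16), (1, 4, 16, 16)
```

The sketch omits the memory feature fusion and transformer cross-fusion networks mentioned in the abstract; it only illustrates how temporal context from memory frames can be read into the current-frame representation before the anchor-free heads.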
| Original language | English |
| --- | --- |
| Pages (from-to) | 73-83 |
| Number of pages | 11 |
| Journal | IEEE Multimedia |
| Volume | 30 |
| Issue number | 1 |
| Early online date | 23 Sept 2022 |
| DOIs | |
| Publication status | Published - Mar 2023 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 1994-2012 IEEE.
Funding
This work was supported in part by the Natural Science Foundation of China (NSFC) under Grants 61871445 and 61302156, and in part by the Key R&D Foundation Project of Jiangsu Province under Grant BE2016001-4.
Keywords
- Anchor-free
- Feature cross fusion
- Object tracking
- Space-time memory network