Anchor-Free Tracker Based on Space-Time Memory Network

Guang HAN, Chen CAO, Jixin LIU, Sam KWONG

Research output: Journal Publications › Journal Article (refereed) › peer-review

3 Citations (Scopus)

Abstract

In visual object tracking, existing trackers struggle to handle appearance deformation, occlusion, and interference from similar objects. To address these problems, this article proposes a new Anchor-free Tracker based on a Space-time Memory Network (ATSMN). In this work, we introduce a space-time memory network, a memory feature fusion network, and a transformer feature cross fusion network. Through the synergy of these components, the tracker can make full use of the temporal context information in memory frames related to the object and better adapt to changes in the object's appearance, yielding accurate classification and regression results. Extensive experimental results on challenging benchmarks show that ATSMN achieves state-of-the-art tracking performance compared with other advanced trackers.
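The abstract does not give implementation details, so the sketch below is only a rough illustration of the kind of pipeline it describes: a key/value memory read over stored memory frames, fused with the current-frame features and fed to an anchor-free classification/regression head. It is written in PyTorch; all module names, feature sizes, and the concat-based fusion are our own assumptions, not the paper's design.

```python
# Minimal sketch (not the authors' code): attention-based read from memory
# frames, then an anchor-free head predicting a per-pixel score map and a
# 4-channel box-offset map. Sizes and layer choices are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MemoryRead(nn.Module):
    """Attend from the query (current) frame to memory-frame features."""

    def __init__(self, channels=256, key_dim=64):
        super().__init__()
        self.q_key = nn.Conv2d(channels, key_dim, 1)
        self.m_key = nn.Conv2d(channels, key_dim, 1)
        self.m_val = nn.Conv2d(channels, channels, 1)

    def forward(self, query_feat, memory_feats):
        # query_feat: (B, C, H, W); memory_feats: (B, T, C, H, W)
        b, t, c, h, w = memory_feats.shape
        q = self.q_key(query_feat).flatten(2)                  # (B, K, HW)
        mem = memory_feats.flatten(0, 1)                       # (B*T, C, H, W)
        k = self.m_key(mem).reshape(b, t, -1, h * w).permute(0, 2, 1, 3).flatten(2)  # (B, K, T*HW)
        v = self.m_val(mem).reshape(b, t, c, h * w).permute(0, 2, 1, 3).flatten(2)   # (B, C, T*HW)
        # Similarity between query locations and all memory locations.
        attn = torch.softmax(
            torch.einsum("bkq,bkm->bqm", q, k) / q.shape[1] ** 0.5, dim=-1
        )
        # Weighted read of memory values back onto the query grid.
        return torch.einsum("bqm,bcm->bcq", attn, v).reshape(b, c, h, w)


class AnchorFreeHead(nn.Module):
    """Per-pixel classification and box-offset regression (no anchors)."""

    def __init__(self, channels=256):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.cls = nn.Conv2d(channels, 1, 3, padding=1)   # object vs. background
        self.reg = nn.Conv2d(channels, 4, 3, padding=1)   # distances to box edges

    def forward(self, query_feat, memory_read):
        x = F.relu(self.fuse(torch.cat([query_feat, memory_read], dim=1)))
        return self.cls(x), F.relu(self.reg(x))


if __name__ == "__main__":
    query = torch.randn(1, 256, 16, 16)        # current-frame features
    memory = torch.randn(1, 3, 256, 16, 16)    # features from 3 memory frames
    read = MemoryRead()(query, memory)
    cls_map, reg_map = AnchorFreeHead()(query, read)
    print(cls_map.shape, reg_map.shape)        # (1, 1, 16, 16), (1, 4, 16, 16)
```

The anchor-free formulation here simply regresses, at every spatial location, the distances to the four box edges, so no anchor boxes or scale priors are needed; the paper's actual fusion networks (memory feature fusion and transformer feature cross fusion) are not reproduced.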

Original language: English
Pages (from-to): 73-83
Number of pages: 11
Journal: IEEE Multimedia
Volume: 30
Issue number: 1
Early online date: 23 Sept 2022
DOIs
Publication status: Published - Mar 2023
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 1994-2012 IEEE.

Funding

This work was supported in part by the Natural Science Foundation of China (NSFC) under Grants 61871445 and 61302156, and in part by the Key R&D Foundation Project of Jiangsu Province under Grant BE2016001-4.

Keywords

  • Anchor-free
  • Feature cross fusion
  • Object tracking
  • Space-time memory network

