Video saliency detection via sparsity-based reconstruction and propagation

Runmin CONG, Jianjun LEI*, Huazhu FU, Fatih PORIKLI, Qingming HUANG, Chunping HOU

*Corresponding author for this work

Research output: Journal Publications › Journal Article (refereed) › peer-review

83 Citations (Scopus)

Abstract

Video saliency detection aims to continuously discover motion-related salient objects from video sequences. Since it must jointly consider spatial and temporal constraints, video saliency detection is more challenging than image saliency detection. In this paper, we propose a new method to detect salient objects in video based on sparse reconstruction and propagation. With the assistance of novel static and motion priors, a single-frame saliency model is first designed to represent the spatial saliency of each individual frame via sparsity-based reconstruction. Then, through progressive sparsity-based propagation, the sequential correspondence in the temporal space is captured to produce the inter-frame saliency map. Finally, these two maps are incorporated into a global optimization model to achieve spatio-temporal smoothness and global consistency of the salient object across the whole video. Experiments on three large-scale video saliency datasets demonstrate that the proposed method outperforms state-of-the-art algorithms both qualitatively and quantitatively.
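
The single-frame model builds on the general idea that regions a background dictionary cannot sparsely reconstruct are likely salient. Below is a minimal Python sketch of this generic sparsity-based reconstruction step, assuming per-region features (e.g., mean color of superpixels) are already extracted; the function name, the boundary-based background dictionary, and the `sparsity` weight are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch of saliency via sparsity-based reconstruction (illustrative only).
# The boundary-based background dictionary and the sparsity weight are assumptions,
# not the paper's exact design.
import numpy as np
from sklearn.linear_model import Lasso

def reconstruction_saliency(features, boundary_mask, sparsity=0.01):
    """features: (N, d) array of region descriptors (e.g., mean Lab color per superpixel).
    boundary_mask: (N,) boolean array marking frame-boundary regions used as background templates.
    Returns an (N,) saliency score normalized to [0, 1]."""
    dictionary = features[boundary_mask]           # (M, d) background templates
    coder = Lasso(alpha=sparsity, fit_intercept=False, max_iter=2000)
    saliency = np.zeros(len(features))
    for i, x in enumerate(features):
        # Sparse-code x over the background dictionary: min ||x - D^T a||^2 + lambda * ||a||_1
        coder.fit(dictionary.T, x)
        reconstruction = dictionary.T @ coder.coef_
        # Regions poorly explained by the background templates get a high
        # reconstruction error, which serves as the saliency score.
        saliency[i] = np.sum((x - reconstruction) ** 2)
    # Normalize to [0, 1] for use in later propagation / optimization stages.
    rng = saliency.max() - saliency.min()
    return (saliency - saliency.min()) / (rng + 1e-12)
```

In the paper's pipeline, such a per-frame map would additionally incorporate the static and motion priors, be propagated forward and backward across frames via sparsity-based propagation, and finally be refined by the global optimization model.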

Original language: English
Article number: 8704996
Pages (from-to): 4819-4831
Number of pages: 13
Journal: IEEE Transactions on Image Processing
Volume: 28
Issue number: 10
Early online date: 2 May 2019
DOIs
Publication status: Published - Oct 2019
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 1992-2012 IEEE.

Keywords

  • color and motion prior
  • forward-backward propagation
  • global optimization
  • sparse reconstruction
  • Video saliency detection
