Going From RGB to RGBD Saliency: A Depth-Guided Transformation Model

Runmin CONG, Jianjun LEI, Huazhu FU, Junhui HOU, Qingming HUANG, Sam KWONG

Research output: Journal Publications › Journal Article (refereed) › peer-review

135 Citations (Scopus)


Depth information has been demonstrated to be useful for saliency detection. However, existing methods for RGBD saliency detection mainly focus on designing standalone, comprehensive models, while overlooking the transferability of existing RGB saliency detection models. In this article, we propose a novel depth-guided transformation model (DTM) going from RGB saliency to RGBD saliency. The proposed model includes three components, namely: 1) multilevel RGBD saliency initialization; 2) depth-guided saliency refinement; and 3) saliency optimization with depth constraints. The explicit depth feature is first utilized in the multilevel RGBD saliency model to initialize the RGBD saliency by combining the global compactness saliency cue and the local geodesic saliency cue. The depth-guided saliency refinement further highlights the salient objects and suppresses the background regions by introducing prior depth domain knowledge and a refined depth shape prior. Benefiting from the consistency of the entire object in the depth map, we formulate an optimization model to attain more consistent and accurate saliency results via an energy function that integrates a unary data term, a color smoothness term, and a depth consistency term. Experiments on three public RGBD saliency detection benchmarks demonstrate the effectiveness of the proposed DTM and its performance improvement going from RGB to RGBD saliency.
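The optimization step described above — an energy function combining a unary data term, a color smoothness term, and a depth consistency term — can be sketched as a quadratic energy over region (e.g., superpixel) saliency values, solvable in closed form as a linear system. This is a minimal illustrative sketch, not the paper's exact formulation: the affinity weights `color_w` and `depth_w`, the balance parameters `lam_c` and `lam_d`, and the function name are all assumptions.

```python
import numpy as np

def refine_saliency(unary, color_w, depth_w, lam_c=0.5, lam_d=0.5):
    """Minimize a hypothetical quadratic energy of the form
        E(s) = sum_i (s_i - u_i)^2
             + lam_c * sum_ij w^c_ij (s_i - s_j)^2   # color smoothness
             + lam_d * sum_ij w^d_ij (s_i - s_j)^2   # depth consistency
    unary   : (n,) initial saliency values in [0, 1]
    color_w : (n, n) symmetric color-affinity weights
    depth_w : (n, n) symmetric depth-consistency weights
    """
    n = unary.shape[0]
    # Graph Laplacians encode the two pairwise terms.
    L_c = np.diag(color_w.sum(axis=1)) - color_w
    L_d = np.diag(depth_w.sum(axis=1)) - depth_w
    # Setting dE/ds = 0 gives: (I + lam_c*L_c + lam_d*L_d) s = u
    A = np.eye(n) + lam_c * L_c + lam_d * L_d
    return np.linalg.solve(A, unary)
```

Because the system matrix is a diagonally dominant M-matrix whose rows of the inverse sum to one, the refined values remain a convex combination of the initial saliency values, so the output stays within the input's range while neighboring regions with strong color or depth affinity are pulled toward agreement.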
Original language: English
Pages (from-to): 3627-3639
Number of pages: 13
Journal: IEEE Transactions on Cybernetics
Issue number: 8
Early online date: 20 Aug 2019
Publication status: Published - Aug 2020
Externally published: Yes

Bibliographical note

This work was supported in part by the National Key Research and Development Program of China under Grant 2017YFB1002900, in part by the Fundamental Research Funds for the Central Universities under Grant 2019RC039, in part by the National Natural Science Foundation of China under Grant 61722112, Grant 61520106002, Grant 61731003, Grant 61836002, Grant 61620106009, Grant U1636214, Grant 61873142, Grant 61772344, and Grant 61672443, in part by the Key Research Program of Frontier Sciences, CAS under Grant QYZDJ-SSW-SYS013, in part by Hong Kong Research Grants Council (RGC) General Research Funds under Grant 9042038 (CityU 11205314) and Grant 9042322 (CityU 11200116), and in part by Hong Kong RGC Early Career Schemes under Grant 9048123.


Keywords

  • Depth cue
  • energy function optimization
  • refined depth shape prior (RDSP)
  • RGBD images
  • saliency detection
  • transformation model


