Interactive nonlocal joint learning network for red, green, blue plus depth salient object detection

Peng LI, Zhilei CHEN, Haoran XIE, Mingqiang WEI, Fu Lee WANG, Gary CHENG*

*Corresponding author for this work

Research output: Journal Publications › Journal Article (refereed) › peer-review

Abstract


Research into red, green, blue plus depth (RGB-D) salient object detection (SOD) has identified the challenging problem of how to exploit raw depth features and fuse cross-modal (CM) information. To solve this problem, we propose an interactive nonlocal joint learning (INL-JL) network for high-quality RGB-D SOD. INL-JL benefits from three key components. First, it performs joint learning to extract common features from RGB and depth images. Second, it adopts simple yet effective CM fusion blocks at lower levels while leveraging the proposed INL blocks at higher levels, aiming to purify the depth features and make CM fusion more efficient. Third, it uses a dense multiscale transfer strategy to infer saliency maps. INL-JL outperforms state-of-the-art methods on five public datasets, demonstrating its power to improve the quality of RGB-D SOD.
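To make the idea of nonlocal cross-modal fusion concrete, below is a minimal PyTorch sketch of one way such an interactive nonlocal block could be structured. The paper's actual INL block is not reproduced here; the class name InteractiveNonlocalBlock, the query/key/value layout (RGB queries attending to depth keys and values), the channel-reduction factor, and the residual connection are all illustrative assumptions.

```python
# Hedged sketch of a cross-modal nonlocal fusion block, in the spirit of the
# INL blocks described in the abstract. Every design choice below is an
# assumption for illustration, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InteractiveNonlocalBlock(nn.Module):
    """Fuses RGB and depth feature maps with nonlocal (attention-style)
    affinities computed across the two modalities (hypothetical design)."""
    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        inter = channels // reduction
        # Queries come from RGB, keys/values from depth, so each RGB position
        # can attend to spatially distant depth evidence.
        self.q_rgb = nn.Conv2d(channels, inter, kernel_size=1)
        self.k_dep = nn.Conv2d(channels, inter, kernel_size=1)
        self.v_dep = nn.Conv2d(channels, inter, kernel_size=1)
        self.out = nn.Conv2d(inter, channels, kernel_size=1)

    def forward(self, rgb: torch.Tensor, dep: torch.Tensor) -> torch.Tensor:
        b, c, h, w = rgb.shape
        q = self.q_rgb(rgb).flatten(2).transpose(1, 2)   # (B, HW, C')
        k = self.k_dep(dep).flatten(2)                   # (B, C', HW)
        v = self.v_dep(dep).flatten(2).transpose(1, 2)   # (B, HW, C')
        # Scaled dot-product affinities between every pair of positions.
        attn = F.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)  # (B, HW, HW)
        fused = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        # Residual connection keeps the original RGB stream intact.
        return rgb + self.out(fused)

if __name__ == "__main__":
    block = InteractiveNonlocalBlock(channels=64)
    rgb_feat = torch.randn(2, 64, 16, 16)
    dep_feat = torch.randn(2, 64, 16, 16)
    print(block(rgb_feat, dep_feat).shape)  # torch.Size([2, 64, 16, 16])
```

Reserving attention-based fusion for higher levels, as the abstract describes, keeps cost manageable: the (HW x HW) affinity matrix grows quadratically with spatial resolution, so cheaper fusion blocks are a sensible choice at the lower, higher-resolution levels.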
Original language: English
Article number: 063040
Journal: Journal of Electronic Imaging
Volume: 31
Issue number: 6
Early online date: 2 Dec 2022
DOIs
Publication status: Published - 2 Dec 2022

Bibliographical note

© 2022 SPIE and IS&T
