Deep intensity guidance based compression artifacts reduction for depth map

Xu WANG, Pingping ZHANG, Yun ZHANG, Lin MA, Sam KWONG, Jianmin JIANG

Research output: Journal Article (refereed), peer-reviewed

9 Citations (Scopus)

Abstract

In this paper, we propose a deep intensity guidance based compression artifacts reduction model (denoted as DIG-Net) for depth maps. The proposed DIG-Net learns an end-to-end mapping from the color image and the distorted depth map to the uncompressed depth map. To eliminate undesired artifacts such as discontinuities around object boundaries, the model has three branches, which extract high-frequency information from the color image and depth maps as priors. Based on a modified edge-preserving loss function, deep multi-scale guidance information is learned and fused in the model to sharpen the edges of the depth map. Experimental results show the effectiveness and superiority of the proposed model compared with state-of-the-art methods.
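The abstract mentions a modified edge-preserving loss but does not spell it out. A minimal sketch of one plausible objective of this kind is pixel-wise MSE plus a gradient-difference penalty that discourages blurred depth edges; the function names, the weight `lam`, and the exact formulation below are illustrative assumptions, not the paper's actual loss.

```python
# Hedged sketch of an edge-preserving loss in the spirit of DIG-Net's
# objective. The exact loss in the paper may differ; `lam` is a
# hypothetical weight on the gradient (edge) term.

def grad_x(img):
    """Horizontal forward differences of a 2D map (list of lists)."""
    return [[row[j + 1] - row[j] for j in range(len(row) - 1)] for row in img]

def grad_y(img):
    """Vertical forward differences of a 2D map."""
    return [[img[i + 1][j] - img[i][j] for j in range(len(img[0]))]
            for i in range(len(img) - 1)]

def mse(a, b):
    """Mean squared error between two equally sized 2D maps."""
    n = len(a) * len(a[0])
    return sum((a[i][j] - b[i][j]) ** 2
               for i in range(len(a)) for j in range(len(a[0]))) / n

def edge_preserving_loss(pred, target, lam=1.0):
    """Pixel-wise MSE plus a gradient-difference term that penalizes
    blurred or shifted depth edges in the reconstruction."""
    data_term = mse(pred, target)
    edge_term = mse(grad_x(pred), grad_x(target)) + mse(grad_y(pred), grad_y(target))
    return data_term + lam * edge_term
```

On a depth map with a sharp step edge, a blurred reconstruction is penalized more heavily by the gradient term than by plain MSE, which is the intuition behind using such a loss to keep object boundaries sharp.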
Original language: English
Pages (from-to): 234-242
Journal: Journal of Visual Communication and Image Representation
Volume: 57
Early online date: 7 Nov 2018
DOIs
Publication status: Published - Nov 2018
Externally published: Yes

Bibliographical note

This work was supported in part by the National Natural Science Foundation of China under Grants 31670553, 61871270, 61501299, 61672443 and 61620106008, in part by the Guangdong Nature Science Foundation under Grant 2016A030310058, in part by the Shenzhen Emerging Industries of the Strategic Basic Research Project under Grant JCYJ20160226191842793, in part by the Natural Science Foundation of SZU (Grant No. 827000144), and in part by the Tencent “Rhinoceros Birds” Scientific Research Foundation for Young Teachers of Shenzhen University.

Keywords

  • Compression artifacts reduction
  • Convolutional neural network
  • Depth map
  • JPEG compression
