Abstract
In this paper, we propose a deep intensity-guidance-based compression artifacts reduction model (denoted DIG-Net) for depth maps. The proposed DIG-Net learns an end-to-end mapping from the color image and the distorted depth map to the uncompressed depth map. To eliminate undesired artifacts such as discontinuities around object boundaries, the model has three branches that extract high-frequency information from the color image and depth map as priors. Guided by a modified edge-preserving loss function, deep multi-scale guidance information is learned and fused in the model to sharpen the edges of the depth map. Experimental results show the effectiveness and superiority of the proposed model compared with state-of-the-art methods.
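The abstract does not give the exact form of the modified edge-preserving loss. As a rough, non-authoritative sketch, the snippet below shows one common way such a loss can be built for depth-map restoration: a pixel-wise L1 term plus a Sobel gradient-difference term that penalizes blurred or shifted depth edges. The names `edge_preserving_loss`, `sobel_gradients`, and the `edge_weight` balance are illustrative assumptions, not the paper's definition.

```python
import torch
import torch.nn.functional as F

def sobel_gradients(img):
    """Approximate horizontal/vertical gradients with fixed Sobel kernels.
    img: depth map tensor of shape (N, 1, H, W)."""
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]]).view(1, 1, 3, 3).to(img)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return gx, gy

def edge_preserving_loss(pred_depth, gt_depth, edge_weight=0.5):
    """Illustrative loss: pixel-wise L1 plus a gradient-difference term
    that keeps depth edges sharp (assumed form, not the paper's exact loss)."""
    l1 = F.l1_loss(pred_depth, gt_depth)
    gx_p, gy_p = sobel_gradients(pred_depth)
    gx_g, gy_g = sobel_gradients(gt_depth)
    grad = F.l1_loss(gx_p, gx_g) + F.l1_loss(gy_p, gy_g)
    return l1 + edge_weight * grad

# Usage with dummy single-channel depth maps (N, 1, H, W)
pred = torch.rand(2, 1, 64, 64, requires_grad=True)
gt = torch.rand(2, 1, 64, 64)
loss = edge_preserving_loss(pred, gt)
loss.backward()
```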
Original language | English |
---|---|
Pages (from-to) | 234-242 |
Number of pages | 9 |
Journal | Journal of Visual Communication and Image Representation |
Volume | 57 |
Early online date | 7 Nov 2018 |
DOIs | |
Publication status | Published - Nov 2018 |
Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2018 Elsevier Inc.
Funding
This work was supported in part by the National Natural Science Foundation of China under Grants 31670553, 61871270, 61501299, 61672443 and 61620106008, in part by the Guangdong Natural Science Foundation under Grant 2016A030310058, in part by the Shenzhen Emerging Industries of the Strategic Basic Research Project under Grant JCYJ20160226191842793, in part by the Natural Science Foundation of SZU (Grant No. 827000144), and in part by the Tencent “Rhinoceros Birds”-Scientific Research Foundation for Young Teachers of Shenzhen University.
Keywords
- Compression artifacts reduction
- Convolutional neural network
- Depth map
- JPEG compression