In this paper, we propose an intensity-guided CNN (IG-Net) model that learns an end-to-end mapping from the intensity image and the distorted depth map to the uncompressed depth map. To eliminate undesired blocking artifacts, such as discontinuities around object boundaries, two branches are designed to extract high-frequency information from the intensity image and the depth map, respectively. Multi-scale feature fusion and enhancement layers are introduced in the main branch to strengthen the edge information of the restored depth map. Performance evaluation on JPEG compression artifacts shows the effectiveness and superiority of the proposed model compared with state-of-the-art methods.
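The two-branch high-frequency extraction described above can be illustrated with a minimal sketch. Here a fixed Laplacian kernel stands in for the learned convolutional filters, and `fuse` stands in for the multi-scale fusion layers; all function names and the kernel choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical stand-in for a learned high-frequency filter: a fixed
# 3x3 Laplacian kernel (IG-Net learns its filters end-to-end instead).
LAPLACIAN = np.array([[0.0,  1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0,  1.0, 0.0]])

def conv2d_same(img, kernel):
    """Naive 'same'-size 2-D filtering with zero padding.

    Written as cross-correlation; the Laplacian is symmetric, so this
    equals convolution for this kernel.
    """
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def high_freq_branch(img):
    """One branch: extract high-frequency (edge) content from its input."""
    return conv2d_same(img, LAPLACIAN)

def fuse(intensity, depth):
    """Toy fusion: stack the high-frequency maps from both branches."""
    feats = [high_freq_branch(intensity), high_freq_branch(depth)]
    return np.stack(feats, axis=0)  # shape (2, H, W)

intensity = np.random.rand(8, 8)
depth = np.random.rand(8, 8)
fused = fuse(intensity, depth)
print(fused.shape)  # (2, 8, 8)
```

In the full model, these fused edge features would guide the main restoration branch; here the sketch only shows how intensity edges and depth edges are extracted separately and combined.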
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Publication status: Published - 2018
Bibliographical note: This work was supported in part by the National Natural Science Foundation of China under Grants 61501299, 61471348, 61672443, and 61620106008, in part by the Guangdong Nature Science Foundation under Grant 2016A030310058, in part by the Shenzhen Emerging Industries of the Strategic Basic Research Project under Grants JCYJ20150525092941043 and JCYJ20160226191842793, in part by Project 2016049 supported by the SZU R/D Fund, and in part by the Tencent "Rhinoceros Birds"-Scientific Research Foundation for Young Teachers of Shenzhen University.
- Compression artifacts
- Convolutional neural network
- JPEG compression