TY - GEN
T1 - Detail-recovery Image Deraining via Context Aggregation Networks
AU - DENG, Sen
AU - WEI, Mingqiang
AU - WANG, Jun
AU - FENG, Yidan
AU - XIE, Haoran
AU - LIANG, Luming
AU - WANG, Fu Lee
AU - WANG, Meng
N1 - This work was supported by the National Natural Science Foundation of China (No. 61502137, No. 61772267), the HKIBS Research Seed Fund 2019/20 (No. 190-009), the Research Seed Fund (No. 102367) of Lingnan University, Hong Kong, the Fundamental Research Funds for the Central Universities (No. NE2016004), and the Natural Science Foundation of Jiangsu Province (No. BK20190016).
PY - 2020/6/14
Y1 - 2020/6/14
N2 - This paper looks at this intriguing question: are single images whose details are lost during deraining reversible to their artifact-free status? We propose an end-to-end detail-recovery image deraining network (termed DRD-Net) to solve the problem. Unlike existing image deraining approaches that attempt to meet the conflicting goals of simultaneously deraining and preserving details in a unified framework, we propose to view rain removal and detail recovery as two separate tasks, so that each part can specialize rather than trade off between the two conflicting goals. Specifically, we introduce two parallel sub-networks with a comprehensive loss function, which synergize to derain and to recover the details lost during deraining. For complete rain removal, we present a rain residual network with the squeeze-and-excitation (SE) operation to remove rain streaks from rainy images. For detail recovery, we construct a specialized detail repair network consisting of well-designed blocks, named structure detail context aggregation blocks (SDCAB), to encourage the lost details to return, thereby eliminating image degradations. Moreover, the detail recovery branch of our proposed detail repair framework is detachable and can be incorporated into existing deraining methods to boost their performance. DRD-Net has been validated on several well-known benchmark datasets in terms of deraining robustness and detail accuracy. Comparisons show clear visual and numerical improvements of our method over the state of the art.
AB - This paper looks at this intriguing question: are single images whose details are lost during deraining reversible to their artifact-free status? We propose an end-to-end detail-recovery image deraining network (termed DRD-Net) to solve the problem. Unlike existing image deraining approaches that attempt to meet the conflicting goals of simultaneously deraining and preserving details in a unified framework, we propose to view rain removal and detail recovery as two separate tasks, so that each part can specialize rather than trade off between the two conflicting goals. Specifically, we introduce two parallel sub-networks with a comprehensive loss function, which synergize to derain and to recover the details lost during deraining. For complete rain removal, we present a rain residual network with the squeeze-and-excitation (SE) operation to remove rain streaks from rainy images. For detail recovery, we construct a specialized detail repair network consisting of well-designed blocks, named structure detail context aggregation blocks (SDCAB), to encourage the lost details to return, thereby eliminating image degradations. Moreover, the detail recovery branch of our proposed detail repair framework is detachable and can be incorporated into existing deraining methods to boost their performance. DRD-Net has been validated on several well-known benchmark datasets in terms of deraining robustness and detail accuracy. Comparisons show clear visual and numerical improvements of our method over the state of the art.
UR - https://openaccess.thecvf.com/content_CVPR_2020/html/Deng_Detail-recovery_Image_Deraining_via_Context_Aggregation_Networks_CVPR_2020_paper.html
UR - http://www.scopus.com/inward/record.url?scp=85094868619&partnerID=8YFLogxK
U2 - 10.1109/CVPR42600.2020.01457
DO - 10.1109/CVPR42600.2020.01457
M3 - Conference paper (refereed)
SP - 14560
EP - 14569
BT - Proceedings of the Conference on Computer Vision and Pattern Recognition 2020 (CVPR 2020)
T2 - The IEEE/CVF Conference on Computer Vision and Pattern Recognition 2020
Y2 - 14 June 2020 through 19 June 2020
ER -