Abstract
This paper addresses an intriguing question: can single images whose details are lost during deraining be restored to an artifact-free state? We propose an end-to-end detail-recovery image deraining network (termed DRD-Net) to solve this problem. Unlike existing image deraining approaches that attempt to meet the conflicting goals of deraining and detail preservation simultaneously within a unified framework, we propose to view rain removal and detail recovery as two separate tasks, so that each part can specialize rather than trade off between two conflicting goals. Specifically, we introduce two parallel sub-networks with a comprehensive loss function, which synergize to derain and to recover the details lost during deraining. For complete rain removal, we present a rain residual network with the squeeze-and-excitation (SE) operation to remove rain streaks from rainy images. For detail recovery, we construct a specialized detail repair network consisting of well-designed blocks, named the structure detail context aggregation block (SDCAB), to recover the lost details and eliminate image degradation. Moreover, the detail recovery branch of our detail repair framework is detachable and can be incorporated into existing deraining methods to boost their performance. DRD-Net has been validated on several well-known benchmark datasets in terms of deraining robustness and detail accuracy. Comparisons show clear visual and numerical improvements of our method over state-of-the-art methods.
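To make the two-branch design described above concrete, here is a minimal PyTorch sketch: a rain-residual branch built from residual blocks with squeeze-and-excitation (SE) recalibration, in parallel with a detail-repair branch built from dilated-convolution context aggregation blocks standing in for SDCAB. All layer widths, depths, dilation rates, and the way the two outputs are fused are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-excitation: channel-wise feature recalibration."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight channels


class SEResBlock(nn.Module):
    """Residual block with SE, used in the rain-residual branch."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), SEBlock(channels))

    def forward(self, x):
        return x + self.body(x)


class ContextAggregationBlock(nn.Module):
    """Stand-in for SDCAB: parallel dilated convolutions aggregate
    multi-scale context (dilation rates here are assumed)."""
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
             for d in dilations])
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x):
        ctx = torch.cat([b(x) for b in self.branches], dim=1)
        return x + self.fuse(ctx)


class DRDNetSketch(nn.Module):
    """Two parallel sub-networks: one predicts the rain residual,
    the other restores details lost by its removal."""
    def __init__(self, channels=64, num_blocks=4):
        super().__init__()
        self.rain_branch = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1),
            *[SEResBlock(channels) for _ in range(num_blocks)],
            nn.Conv2d(channels, 3, 3, padding=1))
        self.detail_branch = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1),
            *[ContextAggregationBlock(channels) for _ in range(num_blocks)],
            nn.Conv2d(channels, 3, 3, padding=1))

    def forward(self, rainy):
        derained = rainy - self.rain_branch(rainy)  # subtract predicted rain streaks
        detail = self.detail_branch(rainy)          # recover details lost by deraining
        return derained + detail


if __name__ == "__main__":
    net = DRDNetSketch()
    out = net(torch.randn(1, 3, 64, 64))
    print(out.shape)  # torch.Size([1, 3, 64, 64])
```

Because the detail branch only adds a residual correction on top of the derained output, this sketch is consistent with the abstract's claim that the detail recovery branch is detachable: removing `detail_branch` leaves a standalone deraining network, and the branch could in principle be bolted onto other deraining backbones.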
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the Conference on Computer Vision and Pattern Recognition 2020 (CVPR 2020) |
| Pages | 14560-14569 |
| Number of pages | 10 |
| DOIs | |
| Publication status | Published - 14 Jun 2020 |
| Event | The IEEE/CVF Conference on Computer Vision and Pattern Recognition 2020 - Online, 14 Jun 2020 → 19 Jun 2020, http://cvpr2020.thecvf.com/ |
Public Lecture

| Public Lecture | The IEEE/CVF Conference on Computer Vision and Pattern Recognition 2020 |
| --- | --- |
| Abbreviated title | CVPR2020 |
| Period | 14/06/20 → 19/06/20 |
| Internet address | http://cvpr2020.thecvf.com/ |
Funding
This work was supported by the National Natural Science Foundation of China (No. 61502137, No. 61772267), the HKIBS Research Seed Fund 2019/20 (No. 190-009), the Research Seed Fund (No. 102367) of Lingnan University, Hong Kong, the Fundamental Research Funds for the Central Universities (No. NE2016004), and the Natural Science Foundation of Jiangsu Province (No. BK20190016).