Motion deblurring is a challenging task in vision and graphics. Recent research aims to deblur images using multiple sub-networks with multi-scale or multi-patch inputs. However, scaling or splitting the input images inevitably loses spatial detail, and the resulting models are usually complex and computationally expensive. To address these problems, we propose a novel variant-depth scheme: multiple sub-networks of varying depths, all operating on scale-invariant inputs, are combined into a variant-depth network (VDN). In our design, sub-networks at different levels achieve progressive deblurring without transforming the inputs, effectively reducing the computational complexity of the model. Extensive experiments show that our VDN outperforms state-of-the-art motion deblurring methods while maintaining a lower computational cost. The source code is publicly available at: https://github.com/CaiGuoHS/VDN.
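The stage-wise structure described above can be illustrated with a minimal sketch. This is not the authors' implementation (see the linked repository for that); it is a hypothetical toy model assuming the key design points stated in the abstract: several sub-networks of different depths, each processing the full-resolution input without any scaling or patch splitting, chained to refine the estimate progressively. The class and function names (`ToyVDN`, `make_subnetwork`) and the depth values are illustrative only.

```python
import numpy as np

def make_subnetwork(depth, rng):
    """Toy sub-network: `depth` residual-style refinement layers.
    Stands in for a real convolutional sub-network; no down/up-sampling."""
    weights = [rng.normal(0.0, 0.01) for _ in range(depth)]
    def forward(x):
        for w in weights:
            x = x + w * x  # each layer refines the current estimate
        return x
    return forward

class ToyVDN:
    """Sketch of a variant-depth network: sub-networks of increasing
    depth chained together, every stage seeing the same full-resolution
    (scale-invariant) tensor rather than a rescaled or split copy."""
    def __init__(self, depths=(2, 4, 8), seed=0):
        rng = np.random.default_rng(seed)
        self.stages = [make_subnetwork(d, rng) for d in depths]

    def __call__(self, blurry):
        estimate = blurry
        for stage in self.stages:
            # no scaling/splitting: spatial size is preserved at every stage
            estimate = stage(estimate)
        return estimate

vdn = ToyVDN()
img = np.ones((1, 64, 64))  # dummy full-resolution input
out = vdn(img)
print(out.shape)            # spatial size unchanged by the pipeline
```

Because no stage rescales or tiles the input, the output retains the input's spatial dimensions, which is the abstract's stated motivation for avoiding multi-scale and multi-patch schemes.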
Funding Information: This work was supported in part by The Hong Kong Polytechnic University under Grant P0030419, Grant P0030929, and Grant P0035358, and in part by the Hong Kong Institute of Business Studies Research Seed Fund (Lingnan University) under Grant HKIBS RSF-212-004.
© 2022 John Wiley & Sons, Ltd.
- motion deblurring
- scale-invariant input
- variant-depth network