VDN: Variant‐depth network for motion deblurring

Cai GUO, Qian WANG, Hong-Ning DAI*, Ping LI

*Corresponding author for this work

Research output: Journal Publications › Journal Article (refereed) › peer-review

Abstract

Motion deblurring is a challenging task in vision and graphics. Recent methods deblur images using multiple sub-networks with multi-scale or multi-patch inputs. However, scaling or splitting the input images inevitably loses spatial detail, and the resulting models are usually complex and computationally expensive. To address these problems, we propose a novel variant-depth scheme. In particular, we combine multiple variant-depth sub-networks with scale-invariant inputs into a variant-depth network (VDN). In our design, sub-networks at different levels achieve progressive deblurring without transforming the inputs, thereby effectively reducing the computational complexity of the model. Extensive experiments show that our VDN outperforms state-of-the-art motion deblurring methods while maintaining a lower computational cost. The source code is publicly available at: https://github.com/CaiGuoHS/VDN.
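The scheme described above can be illustrated with a minimal structural sketch. This is not the paper's actual architecture: `make_subnet`, `vdn_sketch`, the box-filter "layers", and the depths `(3, 5, 7)` are all hypothetical placeholders standing in for learned convolutional blocks. The sketch only shows the key structural idea, that sub-networks of increasing depth refine the same full-resolution input without any rescaling or patch splitting.

```python
import numpy as np

def make_subnet(depth):
    """Stand-in sub-network: `depth` stacked 3x3 box-filter layers.

    Hypothetical placeholder; the real VDN uses learned conv blocks.
    """
    def subnet(x):
        for _ in range(depth):
            # Pad and average over each 3x3 neighborhood
            # (a crude stand-in for one convolutional layer).
            p = np.pad(x, 1, mode="edge")
            h, w = x.shape
            x = sum(p[i:i + h, j:j + w]
                    for i in range(3) for j in range(3)) / 9.0
        return x
    return subnet

def vdn_sketch(image, depths=(3, 5, 7)):
    """Chain variant-depth sub-networks on one scale-invariant input.

    Each stage refines the previous estimate through a residual
    connection; the spatial resolution is never changed.
    """
    est = image
    for d in depths:
        est = est + make_subnet(d)(est)  # progressive refinement
    return est
```

Because every stage sees the full-resolution estimate, no spatial detail is discarded by downscaling or patching; depth, rather than input transformation, is what varies across stages.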
Original language: English
Journal: Computer Animation and Virtual Worlds
DOIs
Publication status: E-pub ahead of print - 31 May 2022

Bibliographical note

Funding Information:
This work was supported in part by The Hong Kong Polytechnic University under Grant P0030419, Grant P0030929, and Grant P0035358, and in part by the Hong Kong Institute of Business Studies Research Seed Fund under Grant HKIBS RSF‐212‐004.

Publisher Copyright:
© 2022 John Wiley & Sons, Ltd.

Keywords

  • motion deblurring
  • scale-invariant input
  • variant-depth network

