
Model-blind video denoising via frame-to-frame training

Research output: Book Chapters | Papers in Conference Proceedings › Conference paper (refereed) › Research › peer-review

Abstract

Modeling the processing chain that has produced a video is a difficult reverse-engineering task, even when the camera is available. This makes model-based video processing an even more complex task. In this paper we propose a fully blind video denoising method, with two versions: off-line and on-line. This is achieved by fine-tuning a pre-trained AWGN denoising network to the video with a novel frame-to-frame training strategy. Our denoiser can be used without knowledge of the origin of the video or burst, or of the post-processing steps applied after the camera sensor. The on-line version requires only a couple of frames before achieving visually pleasing results for a wide range of perturbations. It nonetheless reaches state-of-the-art performance for standard Gaussian noise, and the off-line version achieves still better performance.
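
To illustrate the frame-to-frame strategy described above, the sketch below fine-tunes a pretrained Gaussian denoiser on a single pair of consecutive frames: the network denoises frame t and is penalized for disagreeing with the motion-compensated noisy frame t-1 on pixels where the flow is trusted. This is a minimal PyTorch sketch, not the authors' implementation; `net` (a pretrained AWGN denoiser), `flow` (optical flow mapping frame t to frame t-1), and `mask` (a binary occlusion mask, e.g. from a flow consistency check) are assumed inputs, and the L1 penalty is one plausible choice of loss.

```python
import torch
import torch.nn.functional as F

def warp(img, flow):
    # Backward-warp `img` (N,C,H,W) so it aligns with the frame that
    # `flow` (N,2,H,W, in pixels, x then y) was computed for.
    n, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(img.device)  # (2,H,W)
    coords = base.unsqueeze(0) + flow                           # absolute sample positions
    # grid_sample expects coordinates normalized to [-1, 1].
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                        # (N,H,W,2)
    return F.grid_sample(img, grid, align_corners=True)

def frame_to_frame_step(net, optimizer, noisy_prev, noisy_curr, flow, mask):
    # One fine-tuning step: denoise frame t and compare it against the
    # warped *noisy* frame t-1 (a noise-to-noise-style target), ignoring
    # occluded pixels via `mask` (N,1,H,W, 1 where the flow is reliable).
    optimizer.zero_grad()
    denoised = net(noisy_curr)
    target = warp(noisy_prev, flow)          # noisy t-1 aligned to frame t
    loss = (mask * (denoised - target).abs()).sum() / mask.sum().clamp(min=1)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under these assumptions, the on-line setting would run one such step per incoming frame, adapting the network as the video plays; the off-line version can instead minimize the same loss over repeated passes through the whole sequence.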
Original language: English
Title of host publication: Proceedings: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2019
Publisher: IEEE
Pages: 11361-11370
Number of pages: 10
ISBN (Electronic): 9781728132938
ISBN (Print): 9781728132945
DOIs
Publication status: Published - 2019
Externally published: Yes
Event: 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019) - Long Beach, United States
Duration: 16 Jun 2019 – 20 Jun 2019

Publication series

Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Volume: 2019-June
ISSN (Print): 1063-6919

Conference

Conference: 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019)
Country/Territory: United States
City: Long Beach
Period: 16/06/19 – 20/06/19

Bibliographical note

Publisher Copyright:
© 2019 IEEE.

Keywords

  • Deep Learning
  • Low-level Vision

