Neural Network Based Multi-Level In-Loop Filtering for Versatile Video Coding

Linwei ZHU, Yun ZHANG, Na LI, Wenhui WU, Shiqi WANG, Sam KWONG

Research output: Journal Publications › Journal Article (refereed) › peer-review

Abstract

To further improve the performance of Versatile Video Coding (VVC), a neural network based multi-level in-loop filtering framework for luma and chroma is presented in this letter, comprising the Reference pixel Level (RL), the Coding tree unit Level (CL), and the Frame Level (FL). The neural network based filters at these levels can be flexibly enabled. At the RL, the upper bound of coding performance is analyzed and an asymmetric convolution is designed. At the CL, the pixels located at the bottom and rightmost boundaries are assigned greater weights in the loss calculation during training. In addition, the co-located luma component is adopted in CL and FL chroma filtering to guide chroma enhancement, owing to the high correlation between luma and chroma. Regarding the network architecture, two input channel fusion schemes are combined to exploit the benefits of both. Extensive experimental results show that the proposed multi-level in-loop filtering method achieves 6.87%, 32.8%, and 36.9% bit rate reductions on average for the Y, U, and V components under the all intra configuration, outperforming state-of-the-art works.
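
The abstract does not give implementation details, so the following is only a minimal PyTorch sketch of two of the ideas it mentions: an asymmetric convolution block (RL) and a position-weighted loss that emphasizes the bottom and rightmost pixels of a CTU patch (CL). The layer widths, boundary width, and weight value are illustrative assumptions, not the paper's actual settings.

```python
import torch
import torch.nn as nn

class AsymmetricConvBlock(nn.Module):
    """Parallel 3x3, 1x3, and 3x1 convolutions whose outputs are summed,
    illustrating (not reproducing) the asymmetric convolution used at the RL."""
    def __init__(self, channels: int):
        super().__init__()
        self.square = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.horizontal = nn.Conv2d(channels, channels, kernel_size=(1, 3), padding=(0, 1))
        self.vertical = nn.Conv2d(channels, channels, kernel_size=(3, 1), padding=(1, 0))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.square(x) + self.horizontal(x) + self.vertical(x))


def ctu_weighted_l1_loss(pred, target, boundary=4, boundary_weight=2.0):
    """L1 loss in which the bottom rows and rightmost columns of a CTU patch
    receive a larger weight, as described for CL training.
    `boundary` and `boundary_weight` are assumed values for illustration."""
    n, c, h, w = pred.shape
    weight = torch.ones(1, 1, h, w, device=pred.device)
    weight[:, :, -boundary:, :] = boundary_weight   # bottom rows
    weight[:, :, :, -boundary:] = boundary_weight   # rightmost columns
    diff = (pred - target).abs()
    return (weight * diff).sum() / (weight.sum() * n * c)


# Usage sketch on random tensors standing in for reconstructed and original patches.
block = AsymmetricConvBlock(channels=64)
features = block(torch.randn(1, 64, 128, 128))
loss = ctu_weighted_l1_loss(torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128))
```

Emphasizing the bottom and rightmost pixels is consistent with their role as reference samples for subsequently coded blocks, which is presumably why the CL filter weights them more heavily during training.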

Original language: English
Pages (from-to): 1-1
Journal: IEEE Transactions on Circuits and Systems for Video Technology
DOIs
Publication status: E-pub ahead of print - 28 Jun 2024

Bibliographical note

Publisher Copyright:
IEEE

Keywords

  • In-loop filtering
  • multi-level
  • neural network
  • versatile video coding
