Enhanced Context Mining and Filtering for Learned Video Compression

Haifeng GUO, Sam KWONG, Dongjie YE, Shiqi WANG

Research output: Journal Article (refereed), peer-reviewed

Abstract

The Deep Contextual Video Compression (DCVC) framework adopts a conditional coding paradigm in which a context is extracted and used as the condition for both the contextual encoder-decoder and the entropy model. In this paper, we propose enhanced context mining and filtering to improve the compression efficiency of DCVC. First, since the context in DCVC is generated without supervision and redundancy may exist among its channels, we propose an enhanced context mining model that mitigates this cross-channel redundancy to obtain superior context features. Second, we introduce a transformer-based enhancement network as a filtering module that captures long-distance dependencies and further improves compression efficiency. The network operates on a full-resolution pipeline and computes self-attention along the channel dimension. By combining the local modeling ability of the enhanced context mining model with the non-local modeling ability of the transformer-based enhancement network, our model outperforms the low-delay P (LDP) configuration of Versatile Video Coding (VVC), achieving average bit savings of 6.7% in terms of MS-SSIM.
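
Although the record gives no implementation details, the channel-wise self-attention described in the abstract can be sketched concretely. The snippet below is a minimal PyTorch illustration, not the paper's actual architecture: it computes self-attention along the channel dimension, producing a C x C attention map whose cost grows linearly with the number of pixels, which is what makes a full-resolution pipeline affordable. All module and parameter names (ChannelSelfAttention, num_heads, temperature) are hypothetical.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ChannelSelfAttention(nn.Module):
        """Self-attention computed over channels: the attention map is
        C x C rather than (H*W) x (H*W), so the cost is linear in the
        spatial size and the feature map can stay at full resolution.
        Illustrative sketch only, not the paper's architecture."""

        def __init__(self, channels: int, num_heads: int = 4):
            super().__init__()
            assert channels % num_heads == 0
            self.num_heads = num_heads
            self.qkv = nn.Conv2d(channels, channels * 3, kernel_size=1)
            self.proj = nn.Conv2d(channels, channels, kernel_size=1)
            # per-head learnable scaling (a hypothetical design choice)
            self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, h, w = x.shape
            q, k, v = self.qkv(x).chunk(3, dim=1)
            # (b, c, h, w) -> (b, heads, c_per_head, h*w)
            shape = (b, self.num_heads, c // self.num_heads, h * w)
            q, k, v = q.reshape(shape), k.reshape(shape), v.reshape(shape)
            # normalize along the spatial axis so the C x C logits are bounded
            q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
            attn = (q @ k.transpose(-2, -1)) * self.temperature  # (b, heads, c', c')
            attn = attn.softmax(dim=-1)
            out = (attn @ v).reshape(b, c, h, w)
            return x + self.proj(out)  # residual, keeps full resolution

    # Usage: input and output shapes match, so the block can be stacked.
    x = torch.randn(1, 64, 128, 128)
    y = ChannelSelfAttention(channels=64)(x)  # -> (1, 64, 128, 128)

Because input and output shapes match, such a block could in principle be stacked inside a full-resolution filtering network; whether the paper's enhancement network uses this exact formulation is an assumption here.
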
Original language: English
Pages (from-to): 1-13
Number of pages: 13
Journal: IEEE Transactions on Multimedia
Early online date: 18 Sept 2023
Publication status: E-pub ahead of print, 18 Sept 2023

Bibliographical note

Publisher Copyright: IEEE

Keywords

  • Codes
  • Context modeling
  • end-to-end training approach
  • enhanced context mining
  • Entropy
  • Filtering
  • Image coding
  • in-loop filtering
  • Learned video compression
  • Transformers
  • Video compression
