An End-to-End Video Coding Method via Adaptive Vision Transformer

Haoyan YANG, Mingliang ZHOU, Zhaowei SHANG, Huayan PU, Jun LUO, Xiaoxu HUANG, Shilong WANG, Huajun CAO, Xuekai WEI*, Weizhi XIAN

*Corresponding author for this work

Research output: Journal Publications › Journal Article (refereed) › peer-review

Abstract

Deep learning-based video coding methods have demonstrated superior performance compared to classical video coding standards in recent years. The vast majority of existing deep video coding (DVC) networks are based on convolutional neural networks (CNNs), whose main drawback is that, because CNNs are limited by the size of their receptive field, they cannot effectively handle long-range dependencies and local detail recovery. How to better capture and process both the overall structure and the local texture information in video coding is therefore the core issue. Notably, the transformer employs a self-attention mechanism that captures dependencies between any two positions in the input sequence without distance constraints, which offers an effective solution to this problem. In this paper, we propose end-to-end transformer-based adaptive video coding (TAVC). First, we compress the motion vectors and residuals through a compression network built on the vision transformer (ViT) and design a ViT-based motion compensation network. Second, because video coding must adapt to inputs of different resolutions, we introduce a position encoding generator (PEG) as adaptive position encoding (APE) to maintain translation invariance across video coding tasks at different resolutions. Experiments show that in terms of the multiscale structural similarity index measure (MS-SSIM), the proposed method significantly outperforms conventional engineering codecs such as x264, x265, and VTM-15.2, and it also achieves a clear improvement over CNN-based DVC methods. Under the peak signal-to-noise ratio (PSNR) metric, TAVC likewise achieves good performance.
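The abstract's key adaptive component, the position encoding generator (PEG), is commonly realized as a depthwise convolution applied to the token grid, so the encoding is produced from the features themselves and works at any input resolution. The sketch below is a minimal, hypothetical NumPy illustration of that idea (the function name `peg`, the fixed 3×3 kernel size, and the zero padding are assumptions for illustration, not the paper's exact design):

```python
import numpy as np

def peg(tokens, h, w, kernels):
    """Hypothetical minimal PEG sketch: add a depthwise 3x3 convolution of
    the token grid back onto the tokens as a conditional position encoding.

    tokens  : (h*w, c) array of transformer tokens in row-major grid order
    kernels : (c, 3, 3) depthwise filter, one 3x3 kernel per channel
    Returns : (h*w, c) tokens with the generated position encoding added
    """
    n, c = tokens.shape
    assert n == h * w, "token count must match the h x w grid"
    grid = tokens.reshape(h, w, c)
    # Zero-pad spatially so the output grid keeps the input resolution.
    padded = np.pad(grid, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(grid)
    for dy in range(3):
        for dx in range(3):
            # Depthwise: each channel is filtered by its own 3x3 kernel.
            out += padded[dy:dy + h, dx:dx + w, :] * kernels[:, dy, dx]
    return tokens + out.reshape(n, c)
```

Because the same small kernel slides over whatever grid it is given, one set of weights serves every resolution, which is the translation-invariance property the abstract attributes to the PEG/APE design.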

Original language: English
Article number: 2354023
Journal: International Journal of Pattern Recognition and Artificial Intelligence
Volume: 38
Issue number: 1
DOIs
Publication status: Published - Jan 2024
Externally published: Yes

Bibliographical note

Publisher Copyright:
© World Scientific Publishing Company.

Keywords

  • Deep video coding
  • motion estimation
  • position encoding
  • Swin transformer

