When Video Compression Meets Multimodal Large Language Models: A Unified Paradigm for Cross-Modality Video Compression

  • Pingping ZHANG
  • Jinlong LI
  • Kecheng CHEN
  • Meng WANG
  • Long XU
  • Haoliang LI
  • Nicu SEBE
  • Sam KWONG
  • Shiqi WANG

Research output: Journal Publications › Journal Article (refereed) › peer-review

Abstract

Traditional video compression methods perform well at high bitrates but struggle to preserve fine-grained semantic information at low bitrates. Recently, with the blossoming of Multimodal Large Language Models (MLLMs), cross-modal compression techniques offer prospective solutions for improving video compression under low-bitrate conditions. In this paper, we propose a unified Cross-Modality Video Compression (CMVC) framework that integrates multimodal representations and video generative models. The encoder disentangles video into spatial and temporal components, which are mapped to compact cross-modal representations using MLLMs. During decoding, different encoding-decoding modes are employed to acquire various video reconstruction qualities, including Text-Text-to-Video (TT2V) for semantic preservation and Image-Text-to-Video (IT2V) for perceptual consistency. Additionally, we elaborate on an efficient frame interpolation model using Low-Rank Adaptation (LoRA) to improve the perceptual quality. Experimental results demonstrate that TT2V achieves effective semantic reconstruction, while IT2V ensures competitive perceptual consistency. These findings suggest the potential of leveraging multimodal priors to improve video compression, offering promising future research directions.
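The abstract mentions adapting a frame interpolation model with Low-Rank Adaptation (LoRA) but gives no implementation details. As a minimal sketch of the general LoRA technique (not the paper's actual model), the idea is to freeze a pretrained weight matrix W and learn only a low-rank update B·A, so the adapted layer computes Wx + B(Ax) with far fewer trainable parameters; all dimensions and names below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 64, 64, 4  # illustrative sizes; rank << d_in, d_out

# Frozen pretrained weight (stands in for one layer of the video model).
W = rng.standard_normal((d_out, d_in))

# LoRA factors: only A and B would be trained; W stays frozen.
A = rng.standard_normal((rank, d_in)) * 0.01
B = np.zeros((d_out, rank))  # zero-init: the adapter starts as a no-op

def lora_forward(x, scale=1.0):
    """Adapted layer: base output W@x plus low-rank update scale * B@(A@x)."""
    return W @ x + scale * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapted layer matches the frozen layer.
assert np.allclose(lora_forward(x), W @ x)

# Trainable parameters: rank*(d_in + d_out) = 512 vs full d_in*d_out = 4096.
print(rank * (d_in + d_out), "trainable vs", d_in * d_out, "full")
```

The efficiency claim follows from the parameter count: fine-tuning touches only rank·(d_in + d_out) values instead of the full d_in·d_out weight matrix.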
Original language: English
Pages (from-to): 1-5
Number of pages: 5
Journal: IEEE Signal Processing Letters
Publication status: E-pub ahead of print - 11 Mar 2026

Bibliographical note

Publisher Copyright:
© 1994-2012 IEEE.

Keywords

  • Video
  • multimodal representations
  • semantic reconstruction
  • multimodal large language models
