When Video Compression Meets Multimodal Large Language Models: A Unified Paradigm for Cross-Modality Video Compression

Abstract
Traditional video compression methods perform well at high bitrates but struggle to preserve fine-grained semantic information at low bitrates. Recently, with the blossoming of Multimodal Large Language Models (MLLMs), cross-modal compression techniques offer prospective solutions for improving video compression under low-bitrate conditions. In this paper, we propose a unified Cross-Modality Video Compression (CMVC) framework that integrates multimodal representations and video generative models. The encoder disentangles video into spatial and temporal components, which are mapped to compact cross-modal representations using MLLMs. During decoding, different encoding-decoding modes are employed to obtain various video reconstruction qualities, including Text-Text-to-Video (TT2V) for semantic preservation and Image-Text-to-Video (IT2V) for perceptual consistency. Additionally, we elaborate on an efficient frame interpolation model using Low-Rank Adaptation (LoRA) to improve perceptual quality. Experimental results demonstrate that TT2V achieves effective semantic reconstruction, while IT2V ensures competitive perceptual consistency. These findings suggest the potential of leveraging multimodal priors to improve video compression, offering promising future research directions.
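The LoRA technique mentioned in the abstract adapts a frozen model by training only a low-rank update to its weights. A minimal sketch of that idea is below; all dimensions, names, and the zero-initialization of the up-projection are illustrative assumptions, not details of the paper's actual interpolation model.

```python
import numpy as np

# Illustrative LoRA-style update (hypothetical shapes; not the paper's model).
rng = np.random.default_rng(0)

d_in, d_out, rank = 64, 64, 4                  # assumed layer dimensions
W = rng.standard_normal((d_out, d_in))         # frozen base weight (not trained)
A = rng.standard_normal((rank, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, rank))                    # trainable up-projection, zero-init

def lora_forward(x, scale=1.0):
    """y = W x + scale * B (A x): only A and B receive gradient updates."""
    return W @ x + scale * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B zero-initialized, the adapted layer reproduces the frozen layer exactly.
assert np.allclose(lora_forward(x), W @ x)
```

Because only `A` and `B` (rank × (d_in + d_out) parameters) are trained, the adaptation cost is a small fraction of fine-tuning the full `W`, which is what makes LoRA attractive for specializing a large generative model to a task such as frame interpolation.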
| Original language | English |
|---|---|
| Pages (from-to) | 1-5 |
| Number of pages | 5 |
| Journal | IEEE Signal Processing Letters |
| DOIs | |
| Publication status | E-pub ahead of print - 11 Mar 2026 |
Bibliographical note
Publisher Copyright: © 1994-2012 IEEE.
Keywords
- Video
- multimodal representations
- semantic reconstruction
- multimodal large language models