Abstract
Template-generated videos have recently surged in popularity on social media platforms. In general, videos produced from the same template share similar temporal characteristics, which are ignored by current compression schemes. In view of this, we aim to examine how such temporal priors from templates can be effectively utilized during the compression of template-generated videos. First, a comprehensive statistical analysis is conducted, revealing that coding decisions, including merge and non-affine mode selections as well as motion information, are strongly correlated across videos generated from the same template. Subsequently, leveraging such correlations as prior knowledge, a simple yet effective prior-driven compression scheme for template-generated videos is proposed. In particular, a mode decision pruning algorithm is devised to dynamically skip unnecessary advanced motion vector prediction (AMVP) or affine AMVP decisions. Moreover, an improved AMVP motion estimation algorithm is applied to further accelerate reference frame selection and the motion estimation process. Experimental results on the versatile video coding (VVC) platform VTM-23.0 demonstrate that the proposed scheme achieves time reductions of 14.31% and 14.99% under the Low-Delay P (LDP) and Low-Delay B (LDB) configurations, respectively, with negligible Bjøntegaard Delta Rate (BD-Rate) increases of 0.15% and 0.18%, respectively.
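The prior-driven mode pruning described above can be illustrated with a minimal sketch. All names, data structures, and the confidence threshold below are hypothetical assumptions for illustration; they are not the paper's actual algorithm or the VTM encoder API.

```python
# Hypothetical sketch of prior-driven mode decision pruning for
# template-generated videos. Thresholds and names are illustrative
# assumptions, not the paper's implementation.

def prune_inter_modes(cu_pos, template_prior, skip_threshold=0.9):
    """Select which inter modes to evaluate for a coding unit (CU).

    template_prior maps a CU position to the empirical probability that
    co-located CUs in previously coded videos from the same template
    were coded in merge mode.
    """
    modes = ["merge"]  # merge mode is always evaluated
    p_merge = template_prior.get(cu_pos, 0.0)
    if p_merge < skip_threshold:
        # The prior is not confident enough: retain the costlier AMVP
        # and affine AMVP decisions in the candidate list.
        modes += ["amvp", "affine_amvp"]
    return modes

# Example: one CU whose co-located blocks almost always chose merge,
# and one where the prior is weak.
prior = {(64, 32): 0.95, (0, 0): 0.40}
print(prune_inter_modes((64, 32), prior))  # ['merge']
print(prune_inter_modes((0, 0), prior))    # ['merge', 'amvp', 'affine_amvp']
```

The design choice here mirrors the abstract's claim: when template statistics strongly predict the mode decision, the encoder skips the expensive AMVP and affine AMVP searches, trading a negligible rate cost for encoding-time savings.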
| Original language | English |
|---|---|
| Journal | IEEE Transactions on Circuits and Systems for Video Technology |
| Early online date | 18 Aug 2025 |
| DOIs | |
| Publication status | E-pub ahead of print - 18 Aug 2025 |
Bibliographical note
Publisher Copyright: © 1991-2012 IEEE.
Keywords
- inter prediction
- motion estimation
- template-generated videos
- temporal priors
- video compression