Abstract
The Motion Compensated Temporal Filter (MCTF) has repeatedly proven to be an effective pre-processing tool for improving coding performance. The underlying idea is that smoothing with a temporal filter reduces the noise of the to-be-coded image, thereby shrinking the prediction residuals and improving rate-distortion (RD) performance. While abundant effort has been devoted to the design of MCTF filter weights, how motion vector variance and texture complexity influence MCTF has remained relatively under-explored. In this work, we propose an enhanced MCTF method (EMCTF) based on multi-hypothesis reference, motion vector variance, and texture complexity. We take initial steps towards incorporating motion vector variance and texture complexity into the design of the filtering weights. Motion-compensated blocks obtained from multi-hypothesis references are efficiently aggregated to produce the final filtered output. The proposed method is implemented on top of the Versatile Video Encoder (VVenC). Experimental results show that for the faster, fast, medium, and slow presets, the proposed EMCTF achieves 0.85%, 0.91%, 0.85%, and 0.82% Bjøntegaard delta rate (BD-rate) savings, respectively. Moreover, EMCTF introduces little additional encoding complexity, facilitating its future application in real-world scenarios.
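To make the idea concrete, the following is a minimal illustrative sketch of a multi-hypothesis temporal filter whose strength is damped by motion vector variance and texture complexity. It is not the authors' EMCTF implementation; the function name `emctf_filter`, the scaling constants, and the bilateral-style weight are all assumptions chosen for illustration only.

```python
import numpy as np

def emctf_filter(current, references, mvs, base_sigma=4.0):
    """Illustrative multi-hypothesis temporal filter (not the paper's exact weights).

    current:    2-D array, the to-be-coded block.
    references: list of motion-compensated prediction blocks (same shape).
    mvs:        list of (dx, dy) motion vectors, one per reference.
    """
    mvs = np.asarray(mvs, dtype=float)
    # Motion vector variance: high variance across hypotheses suggests
    # unreliable motion, so the temporal hypotheses should be trusted less.
    mv_var = mvs.var(axis=0).sum()
    # Texture complexity proxy: mean absolute gradient of the current block.
    gx = np.abs(np.diff(current, axis=1)).mean()
    gy = np.abs(np.diff(current, axis=0)).mean()
    texture = gx + gy
    # Damp overall filter strength by both factors (illustrative scaling).
    strength = 1.0 / (1.0 + 0.1 * mv_var + 0.05 * texture)
    out = current.astype(float)
    total_w = np.ones_like(out)
    for ref in references:
        err = ref.astype(float) - current
        # Bilateral-style weight: pixels whose prediction residual is small
        # receive stronger smoothing; large residuals are left mostly intact.
        w = strength * np.exp(-(err ** 2) / (2.0 * base_sigma ** 2))
        out += w * ref
        total_w += w
    return out / total_w
```

With zero motion-vector spread and flat texture the damping factor is 1 and the filter reduces to a plain residual-weighted temporal average; as MV variance or texture grows, the output stays closer to the original block.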
Original language | English |
---|---|
Journal | IEEE Transactions on Circuits and Systems for Video Technology |
Early online date | 26 Jun 2024 |
DOIs | |
Publication status | Published - 2024 |
Externally published | Yes |
Bibliographical note
Publisher Copyright: IEEE
Keywords
- Complexity theory
- Encoding
- Information filters
- motion compensated temporal filter
- Motion compensation
- Motion estimation
- motion estimation
- Noise
- pre-processing
- Vectors
- Video coding
- VVenC