Abstract
Chroma intra prediction aims to reduce chroma redundancies within a frame and plays an important role in improving the coding efficiency of intra coding. Existing chroma intra prediction methods typically exploit the spatial relationship between the current luma block and its neighboring reference luma blocks to predict the chroma samples of the current block. However, the spatial properties of the luma component differ from those of the chroma components, which limits the accuracy of chroma intra prediction. To tackle this issue, this paper proposes an efficient Exemplar Colorization Network (ECNet)-based chroma intra prediction method, in which the colorization relationship between the reference luma and chroma components is exploited to predict the chroma components of the current block from its luma component. Inspired by the principle that semantic information in an image exhibits short-range continuity, a Spatial-consistency-based Colorization Transfer Network (SCTNet) is proposed, which builds and transfers colorization representations of neighboring reference blocks for chroma prediction. To improve the chroma prediction capability of SCTNet, a colorization learning module is developed to learn a robust mapping from the luma component to the chroma components in a region-to-pixel manner, and a weight-adaptive reconstruction module is designed to adaptively utilize reference information from neighboring blocks to generate an initial prediction. In addition, to further improve the accuracy of chroma intra prediction, a multi-reference-based chroma refinement network is proposed, which simultaneously uses the spatial information of neighboring reference chroma blocks and the current luma block to eliminate blocking and color-bleeding artifacts in the initial prediction. Experimental results demonstrate that the proposed ECNet outperforms state-of-the-art chroma intra prediction methods in coding performance.
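As background for the luma-to-chroma mapping the abstract describes, the sketch below illustrates a conventional cross-component linear predictor in the spirit of VVC's CCLM baseline, not the paper's ECNet: a linear model chroma = a·luma + b is fitted on neighboring reference samples and then applied to the reconstructed luma of the current block. The function names and sample values are illustrative assumptions, not from the paper.

```python
# Toy cross-component linear chroma predictor (CCLM-style baseline).
# NOT the paper's ECNet: shown only to illustrate the general idea of
# predicting chroma from reconstructed luma via neighboring references.

def fit_linear_model(ref_luma, ref_chroma):
    """Least-squares fit of chroma = a * luma + b over reference samples."""
    n = len(ref_luma)
    mean_l = sum(ref_luma) / n
    mean_c = sum(ref_chroma) / n
    cov = sum((l - mean_l) * (c - mean_c)
              for l, c in zip(ref_luma, ref_chroma))
    var = sum((l - mean_l) ** 2 for l in ref_luma)
    a = cov / var if var else 0.0      # flat luma: fall back to the mean
    b = mean_c - a * mean_l
    return a, b

def predict_chroma(block_luma, a, b):
    """Apply the fitted model to each luma sample of the current block."""
    return [[a * l + b for l in row] for row in block_luma]

# Neighboring reference samples (above/left of the current block).
a, b = fit_linear_model([100, 120, 140, 160], [60, 70, 80, 90])
pred = predict_chroma([[110, 130], [150, 170]], a, b)  # per-sample prediction
```

A learned method such as the proposed ECNet replaces this fixed linear mapping with a network that models the colorization relationship between reference luma and chroma, which is where its accuracy gains over linear-model baselines come from.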
| Original language | English |
|---|---|
| Pages (from-to) | 4713-4724 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Multimedia |
| Volume | 27 |
| DOIs | |
| Publication status | Published - 27 Jan 2025 |
Bibliographical note
Publisher Copyright: © 2025 IEEE. All rights reserved, including rights for text and data mining, and training of artificial intelligence and similar technologies.
Funding
This work was supported by the National Natural Science Foundation of China under Grant 62322116.
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs):
- SDG 9: Industry, Innovation, and Infrastructure
Keywords
- Chroma intra prediction
- exemplar colorization network
- intra coding
- versatile video coding
- video coding
Title
Efficient Chroma Intra Prediction via Exemplar Colorization Network for Versatile Video Coding