Zero-quantised discrete cosine transform prediction technique for video encoding

H. WANG*, S. KWONG, C. W. KOK, M. Y. CHAN

*Corresponding author for this work

Research output: Journal Publications › Journal Article (refereed) › peer-review

2 Citations (Scopus)

Abstract

A new analytical model to eliminate redundant discrete cosine transform (DCT) and quantisation (Q) computations in block-based video encoders is proposed. The dynamic ranges of the quantised DCT coefficients are analysed, and a threshold scheme is then derived to determine whether the DCT and Q computations can be skipped without degrading video quality. In addition, fast DCT/inverse DCT (IDCT) algorithms are presented to implement the proposed analytical model. The proposed model is compared with comparable analytical models reported in the literature. Both the theoretical analysis and the experimental results demonstrate that it greatly reduces the computational complexity of video encoding without any performance degradation and outperforms the other analytical models.
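The paper's own thresholds, derived from the dynamic ranges of the quantised coefficients, are not reproduced in this record. The sketch below only illustrates the general zero-quantised DCT prediction idea assumed by this family of techniques: if a cheap measure of the residual block (here the sum of absolute differences, SAD) falls below a quantiser-dependent bound, every quantised coefficient is predicted to be zero and the DCT, Q and IDCT for that block are skipped. The `threshold_scale` constant is illustrative, not the paper's bound.

```python
import numpy as np

def sad(block):
    """Sum of absolute differences of a residual block (cheap energy proxy)."""
    return np.abs(block).sum()

def predict_all_zero(block, qstep, threshold_scale=2.0):
    """Predict whether all quantised DCT coefficients of `block` are zero.

    Illustrative SAD-based test only; the paper derives its own threshold
    scheme from the dynamic ranges of the quantised DCT coefficients.
    """
    return sad(block) < threshold_scale * qstep

def dct2(block):
    """2-D type-II DCT built from a separable 1-D DCT matrix."""
    n = block.shape[0]
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)   # DC row uses the smaller normalisation factor
    return c @ block @ c.T

def encode_block(residual, qstep):
    """Toy encoder step: skip DCT and quantisation for predicted all-zero blocks."""
    if predict_all_zero(residual, qstep):
        return np.zeros_like(residual)      # DCT, Q and IDCT all skipped
    return np.round(dct2(residual) / qstep) # full transform/quantisation path

# Small residuals at coarse quantisation are typically skipped.
rng = np.random.default_rng(0)
residual = rng.integers(-2, 3, size=(8, 8)).astype(float)
print(encode_block(residual, qstep=16))
```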

Original language: English
Pages (from-to): 677-683
Number of pages: 7
Journal: IEE Proceedings: Vision, Image and Signal Processing
Volume: 153
Issue number: 5
DOIs
Publication status: Published - Oct 2006
Externally published: Yes
