CNN-based rate-distortion modeling for H.265/HEVC

Bin XU, Xiang PAN, Yan ZHOU, Yiming LI, Daiqin YANG, Zhenzhong CHEN*

*Corresponding author for this work

Research output: Book Chapters | Papers in Conference Proceedings, Conference paper (refereed), peer-reviewed

28 Citations (Scopus)

Abstract

In this paper, we propose a convolutional neural network (CNN)-based rate-distortion (R-D) modeling method for H.265/HEVC. A fully convolutional network is designed to learn end-to-end, pixels-to-pixels mappings from the original images to the structural similarity (SSIM) maps that indicate distortion. The rate information is predicted through a CNN with fully connected layers as well. In contrast to traditional analytical models, the proposed networks learn the mappings to the distortion or rate information directly from the image content. The experiments demonstrate the feasibility of our CNN-based framework for rate-distortion modeling.
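For reference, the SSIM index that the distortion network regresses is conventionally defined for two image patches x and y as

$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)},$$

where the μ, σ² and σ_xy terms are local means, variances and covariance, and C_1, C_2 are small stabilizing constants. As a rough illustration of the two predictors described in the abstract, a minimal PyTorch sketch might look like the following; the framework choice, layer counts, channel widths, and the 64x64 input size are illustrative assumptions, not the authors' published architecture.

```python
# Minimal sketch, assuming PyTorch and illustrative layer sizes; the paper's
# actual architecture, depths, and input dimensions are not reproduced here.
import torch
import torch.nn as nn

class DistortionFCN(nn.Module):
    """Fully convolutional net: end-to-end, pixels-to-pixels mapping from an
    input image to a same-size predicted SSIM map."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),  # SSIM for natural images typically lies in [0, 1]
        )

    def forward(self, x):      # x: (N, 1, H, W) luma patch
        return self.body(x)    # (N, 1, H, W) predicted SSIM map

class RateCNN(nn.Module):
    """Convolutional features followed by fully connected layers that
    regress a single scalar rate (bit count) for the input block."""
    def __init__(self, block=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (block // 4) ** 2, 128), nn.ReLU(inplace=True),
            nn.Linear(128, 1),  # predicted bits for the block
        )

    def forward(self, x):
        return self.fc(self.features(x))

x = torch.rand(1, 1, 64, 64)   # one 64x64 CTU-sized patch
ssim_map = DistortionFCN()(x)  # per-pixel distortion estimate
rate = RateCNN()(x)            # scalar rate estimate
```

In such a setup each network would be trained against ground truth produced by an actual HEVC encoder (SSIM maps of reconstructed frames and coded bit counts); the fully convolutional branch preserves spatial resolution so its output aligns pixel-to-pixel with the input, while the rate branch collapses to a scalar through its fully connected layers.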
Original language: English
Title of host publication: IEEE Visual Communications and Image Processing, VCIP 2017
Publisher: IEEE
Pages: 1-4
Number of pages: 4
ISBN (Electronic): 9781538604625
ISBN (Print): 9781538604632
DOIs
Publication status: Published - Dec 2017
Externally published: Yes
Event: 2017 IEEE Visual Communications and Image Processing (VCIP 2017) - St. Petersburg, United States
Duration: 10 Dec 2017 - 13 Dec 2017

Conference

Conference: 2017 IEEE Visual Communications and Image Processing (VCIP 2017)
Country/Territory: United States
City: St. Petersburg
Period: 10/12/17 - 13/12/17

Bibliographical note

Publisher Copyright:
© 2017 IEEE.

Funding

This work was supported in part by the National Natural Science Foundation of China (No. 61471273), the National High-tech R&D Program of China (863 Program, 2015AA015903), the Natural Science Foundation of Hubei Province of China (No. 2015CFA053), and LIESMARS Special Research Funding.

Keywords

  • convolutional neural network (CNN)
  • deep learning
  • H.265/HEVC
  • Rate-distortion model
  • structural similarity
