Unpaired Image Enhancement with Quality-Attention Generative Adversarial Network

Zhangkai NI, Wenhan YANG, Shiqi WANG, Lin MA, Sam KWONG

Research output: Papers in Conference Proceedings › Conference paper (refereed) › peer-review

15 Citations (Scopus)

Abstract

In this work, we aim to learn an unpaired image enhancement model that enriches low-quality images with the characteristics of high-quality images provided by users. We propose a quality attention generative adversarial network (QAGAN) trained on unpaired data, built on a bidirectional Generative Adversarial Network (GAN) embedded with a quality attention module (QAM). The key novelty of the proposed QAGAN lies in the QAM injected into the generator, which learns domain-relevant quality attention directly from the two domains. More specifically, the proposed QAM allows the generator to effectively select semantic-related characteristics along the spatial dimension and to adaptively incorporate style-related attributes along the channel dimension. Therefore, in the proposed QAGAN, not only the discriminators but also the generator can directly access both domains, which significantly facilitates learning the mapping function. Extensive experimental results show that, compared with state-of-the-art methods based on unpaired learning, our method achieves better performance in both objective and subjective evaluations.
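The abstract describes the QAM as gating features along both the spatial and the channel dimensions. The paper's actual architecture is not given here, so the following is only a rough NumPy illustration of the two gating operations (function names, shapes, and the global-average pooling choice are our own assumptions, not the authors' implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W). Gate each channel by its global average response,
    # a crude stand-in for style-related (channel-wise) statistics.
    pooled = feat.mean(axis=(1, 2))            # (C,)
    gate = sigmoid(pooled)                     # (C,) weights in (0, 1)
    return feat * gate[:, None, None]

def spatial_attention(feat):
    # Gate each spatial location by its cross-channel mean response,
    # a crude stand-in for semantic-related (spatial-wise) saliency.
    pooled = feat.mean(axis=0)                 # (H, W)
    gate = sigmoid(pooled)                     # (H, W) weights in (0, 1)
    return feat * gate[None, :, :]

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))             # toy feature map
y = spatial_attention(channel_attention(x))
print(y.shape)                                 # (8, 4, 4)
```

In a real network the pooled statistics would be passed through small learned layers before the sigmoid; here the gates are parameter-free purely to show the shape bookkeeping of applying channel-wise and spatial-wise attention in sequence.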
Original language: English
Title of host publication: MM '20: Proceedings of the 28th ACM International Conference on Multimedia
Publisher: Association for Computing Machinery
Pages: 1697-1705
Number of pages: 9
ISBN (Print): 9781450379885
DOIs
Publication status: Published - Oct 2020
Externally published: Yes
Event: 28th ACM International Conference on Multimedia (MM 2020) - Virtual, Seattle, United States
Duration: 12 Oct 2020 - 16 Oct 2020
https://2020.acmmm.org/

Conference

Conference: 28th ACM International Conference on Multimedia (MM 2020)
Country/Territory: United States
City: Seattle
Period: 12/10/20 - 16/10/20
Internet address: https://2020.acmmm.org/

Bibliographical note

The authors would like to thank the anonymous referees for their insightful comments and suggestions. This work was supported in part by the Hong Kong RGC General Research Funds under Grant 9042322 (CityU 11200116), Grant 9042489 (CityU 11206317), and Grant 9042816 (CityU 11209819), and in part by the Natural Science Foundation of China under Grant 61672443.

Keywords

  • computer vision
  • image processing
  • unpaired image enhancement
