MIEGAN: Mobile Image Enhancement via a Multi-Module Cascade Neural Network

Zhaoqing PAN, Feng YUAN, Jianjun LEI, Wanqing LI, Nam LING, Sam KWONG

Research output: Journal Publications › Journal Article (refereed) › peer-review

25 Citations (Scopus)


The visual quality of images captured by mobile devices is often inferior to that of images captured by a Digital Single Lens Reflex (DSLR) camera. This paper presents a novel generative adversarial network-based mobile image enhancement method, referred to as MIEGAN. It consists of a novel multi-module cascade generative network and a novel adaptive multi-scale discriminative network. The multi-module cascade generative network is built upon a two-stream encoder, a feature transformer, and a decoder. In the two-stream encoder, a luminance-regularizing stream is proposed to help the network focus on low-light areas. In the feature transformation module, two networks effectively capture the global and local information of an image. To further assist the generative network in producing images of high visual quality, a multi-scale discriminator is used instead of a regular single discriminator to distinguish whether an image is real or fake at both the global and local levels. To balance the global and local discriminators, an adaptive weight allocation scheme is proposed. In addition, a contrast loss is proposed, and a new mixed loss function is developed to improve the visual quality of the enhanced images. Extensive experiments on the popular DSLR photo enhancement dataset and the MIT-FiveK dataset have verified the effectiveness of the proposed MIEGAN.
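The adaptive weight allocation between the global and local discriminators described in the abstract might be sketched as follows. This is a minimal illustration only, assuming a softmax-style weighting in which the scale with the larger adversarial loss receives more emphasis; the paper's actual formulation may differ, and the function names here are hypothetical:

```python
import math

def adaptive_weights(global_loss: float, local_loss: float) -> tuple:
    # Hypothetical softmax-style allocation: normalize the two
    # discriminator losses into weights that sum to one, so the
    # harder (higher-loss) scale is emphasized more.
    e_g = math.exp(global_loss)
    e_l = math.exp(local_loss)
    total = e_g + e_l
    return e_g / total, e_l / total

def combined_adversarial_loss(global_loss: float, local_loss: float) -> float:
    # Weighted combination of the global and local adversarial losses,
    # standing in for the balancing step described in the abstract.
    w_g, w_l = adaptive_weights(global_loss, local_loss)
    return w_g * global_loss + w_l * local_loss
```

Under this assumed scheme, a discriminator scale that is currently harder to fool contributes proportionally more to the total adversarial objective.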
Original language: English
Pages (from-to): 519-533
Number of pages: 15
Journal: IEEE Transactions on Multimedia
Early online date: 28 Jan 2021
Publication status: Published - 2022
Externally published: Yes


  • contrast loss
  • mixed loss
  • mobile image enhancement
  • multi-module cascade neural network

