Abstract
Improving the aesthetic quality of images is a challenging task of broad public interest. To address this problem, most existing algorithms learn an automatic photo enhancer via supervised learning on paired data, which consist of low-quality photos and corresponding expert-retouched versions. However, the style and characteristics of photos retouched by experts may not meet the needs or preferences of general users. In this paper, we present an unsupervised image enhancement generative adversarial network (UEGAN), which learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner, rather than learning from a large number of paired images. The proposed model is based on a single deep GAN that embeds modulation and attention mechanisms to capture richer global and local features. Based on the proposed model, we introduce two losses to address unsupervised image enhancement: (1) a fidelity loss, defined as an ℓ2 regularization in the feature domain of a pre-trained VGG network, to ensure that the enhanced image preserves the content of the input image, and (2) a quality loss, formulated as a relativistic hinge adversarial loss, to endow the input image with the desired characteristics. Both quantitative and qualitative results show that the proposed model effectively improves the aesthetic quality of images.
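The abstract names two losses: a fidelity loss (ℓ2 distance between VGG features of the enhanced and input images) and a quality loss (relativistic hinge adversarial loss). The following is a minimal PyTorch sketch of how such losses could be implemented; the class names, the choice of VGG-19, the truncation layer, and the averaged relativistic formulation are illustrative assumptions, not the authors' released code.

```python
# Illustrative sketch (assumptions noted above), not the authors' implementation.
import torch
import torch.nn as nn
from torchvision import models


class FidelityLoss(nn.Module):
    """ℓ2 distance in the feature space of a pre-trained (frozen) VGG network."""

    def __init__(self, layer_index: int = 16):  # assumed cut-off layer
        super().__init__()
        vgg_features = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features
        self.extractor = nn.Sequential(*list(vgg_features.children())[:layer_index]).eval()
        for p in self.extractor.parameters():
            p.requires_grad = False  # VGG is used only as a fixed feature extractor

    def forward(self, enhanced: torch.Tensor, original: torch.Tensor) -> torch.Tensor:
        return torch.mean((self.extractor(enhanced) - self.extractor(original)) ** 2)


def relativistic_hinge_g_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    """Quality loss for the generator: relativistic average hinge formulation."""
    return 0.5 * (torch.mean(torch.relu(1.0 + (d_real - torch.mean(d_fake))))
                  + torch.mean(torch.relu(1.0 - (d_fake - torch.mean(d_real)))))


def relativistic_hinge_d_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    """Matching relativistic average hinge loss for the discriminator."""
    return 0.5 * (torch.mean(torch.relu(1.0 - (d_real - torch.mean(d_fake))))
                  + torch.mean(torch.relu(1.0 + (d_fake - torch.mean(d_real)))))
```

In this reading, the discriminator scores real (high-quality) and generated images relative to each other, while the fidelity term anchors the generator's output to the content of its input.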
| Field | Value |
|---|---|
| Original language | English |
| Article number | 9204448 |
| Pages (from-to) | 9140-9151 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Image Processing |
| Volume | 29 |
| Early online date | 22 Sept 2020 |
| DOIs | |
| Publication status | Published - 2020 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 1992-2012 IEEE.
Funding
This work was supported in part by the Natural Science Foundation of China under Grant 61772344 and Grant 61672443, in part by the Hong Kong Research Grants Council (RGC) General Research Funds under Grant 9042816 (CityU 11209819) and Grant 9042957 (CityU 11203220), in part by the Hong Kong Research Grants Council (RGC) Early Career Scheme under Grant 9048122 (CityU 21211018), and in part by the Key Project of Science and Technology Innovation 2030 supported by the Ministry of Science and Technology of China under Grant 2018AAA0101301.
Keywords
- generative adversarial network
- global attention
- image enhancement
- unsupervised learning