On the effectiveness of least squares generative adversarial networks

Xudong MAO, Qing LI, Haoran XIE, Raymond Yiu Keung LAU, Zhen WANG, Stephen Paul SMOLLEY

Research output: Journal Publications › Journal Article (refereed)

8 Citations (Scopus)

Abstract

Unsupervised learning with generative adversarial networks (GANs) has proven to be hugely successful. Regular GANs treat the discriminator as a classifier trained with the sigmoid cross-entropy loss function. However, we find that this loss function may lead to the vanishing-gradients problem during learning. To overcome this problem, we propose the Least Squares Generative Adversarial Networks (LSGANs), which adopt the least squares loss for both the discriminator and the generator. We show that minimizing the LSGAN objective is equivalent to minimizing the Pearson χ² divergence, and that this derived objective performs better than the classical least-squares-for-classification formulation. LSGANs offer two benefits over regular GANs: they generate higher-quality images, and they are more stable during training. To evaluate image quality, we conduct both qualitative and quantitative experiments, and the results show that LSGANs generate higher-quality images than regular GANs. We then evaluate stability in two groups of experiments. The first compares LSGANs against regular GANs, both without gradient penalty, on three tasks: a Gaussian mixture distribution, difficult architectures, and a newly proposed evaluation method, datasets with small variability. The second compares LSGANs with gradient penalty (LSGANs-GP) against WGANs with gradient penalty (WGANs-GP). The results show that LSGANs-GP train successfully on all the difficult architectures used for WGANs-GP, including a 101-layer ResNet.
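
Since this record carries only the abstract, it may help to spell out the objectives it refers to. As given in the paper, with a coding scheme of $a$ (fake label), $b$ (real label), and $c$ (the value $G$ wants $D$ to assign to fake data), the LSGAN objectives are

\min_D V(D) = \tfrac{1}{2}\,\mathbb{E}_{x \sim p_{\mathrm{data}}}\big[(D(x) - b)^2\big] + \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}\big[(D(G(z)) - a)^2\big]

\min_G V(G) = \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}\big[(D(G(z)) - c)^2\big]

Choosing $b - c = 1$ and $b - a = 2$ makes $2V(G)$ equal the Pearson $\chi^2$ divergence between $p_{\mathrm{data}} + p_g$ and $2p_g$, which is the equivalence the abstract mentions; the paper also reports that the simpler 0-1 coding ($a = 0$, $b = 1$, $c = 1$) works well in practice.

To make the loss concrete, below is a minimal sketch of the 0-1 coding in PyTorch. It is an illustration only, not the authors' released code: the discriminator D (which must output raw scores, with no sigmoid), the generator G, and the tensors real and noise are hypothetical placeholders.

import torch

def d_loss(D, G, real, noise):
    # Discriminator step: push D(real) toward b = 1 and D(G(z)) toward a = 0.
    fake = G(noise).detach()  # block gradients into G during the D update
    loss_real = 0.5 * ((D(real) - 1.0) ** 2).mean()
    loss_fake = 0.5 * (D(fake) ** 2).mean()
    return loss_real + loss_fake

def g_loss(D, G, noise):
    # Generator step: push D(G(z)) toward c = 1, i.e. make fakes score as real.
    return 0.5 * ((D(G(noise)) - 1.0) ** 2).mean()

Unlike the sigmoid cross-entropy loss, these quadratic terms keep penalizing samples that the discriminator already classifies correctly but that lie far from the decision boundary, which is the mechanism behind the improved gradients and stability claimed in the abstract.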

Original language: English
Pages (from-to): 2947-2960
Number of pages: 14
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 41
Issue number: 12
Early online date: 24 Sep 2018
DOI: 10.1109/TPAMI.2018.2872043
Publication status: Published - 1 Dec 2019
Externally published: Yes

Keywords

  • Generative adversarial networks
  • generative model
  • Generators
  • image generation
  • Least squares GANs
  • Linear programming
  • Stability analysis
  • Task analysis
  • Training
  • χ² divergence

Cite this

MAO, Xudong; LI, Qing; XIE, Haoran; LAU, Raymond Yiu Keung; WANG, Zhen; SMOLLEY, Stephen Paul. / On the effectiveness of least squares generative adversarial networks. In: IEEE Transactions on Pattern Analysis and Machine Intelligence. 2019; Vol. 41, No. 12, pp. 2947-2960.
@article{b25a2dc6ab4c4cfc8243d11dadf8faa9,
title = "On the effectiveness of least squares generative adversarial networks",
abstract = "Unsupervised learning with generative adversarial networks (GANs) has proven to be hugely successful. Regular GANs hypothesize the discriminator as a classifier with the sigmoid cross entropy loss function. However, we found that this loss function may lead to the vanishing gradients problem during the learning process. To overcome such a problem, we propose in this paper the Least Squares Generative Adversarial Networks (LSGANs) which adopt the least squares loss for both the discriminator and the generator. We show that minimizing the objective function of LSGAN yields minimizing the Pearson χ² divergence. We also show that the derived objective function that yields minimizing the Pearson χ² divergence performs better than the classical one of using least squares for classification. There are two benefits of LSGANs over regular GANs. First, LSGANs are able to generate higher quality images than regular GANs. Second, LSGANs perform more stably during the learning process. For evaluating the image quality, we conduct both qualitative and quantitative experiments, and the experimental results show that LSGANs can generate higher quality images than regular GANs. Furthermore, we evaluate the stability of LSGANs in two groups. One is to compare between LSGANs and regular GANs without gradient penalty. We conduct three experiments, including Gaussian mixture distribution, difficult architectures, and a newly proposed method - datasets with small variability, to illustrate the stability of LSGANs. The other one is to compare between LSGANs with gradient penalty (LSGANs-GP) and WGANs with gradient penalty (WGANs-GP). The experimental results show that LSGANs-GP succeed in training for all the difficult architectures used in WGANs-GP, including 101-layer ResNet.",
keywords = "Generative adversarial networks, generative model, Generators, image generation, Least squares GANs, Linear programming, Stability analysis, Task analysis, Training, χ² divergence",
author = "MAO, {Xudong} and LI, {Qing} and XIE, {Haoran} and LAU, {Raymond Yiu Keung} and WANG, {Zhen} and SMOLLEY, {Stephen Paul}",
year = "2019",
month = dec,
day = "1",
doi = "10.1109/TPAMI.2018.2872043",
language = "English",
volume = "41",
pages = "2947--2960",
journal = "IEEE Transactions on Pattern Analysis and Machine Intelligence",
issn = "0162-8828",
publisher = "IEEE Computer Society",
number = "12",

}

On the effectiveness of least squares generative adversarial networks. / MAO, Xudong; LI, Qing; XIE, Haoran; LAU, Raymond Yiu Keung; WANG, Zhen; SMOLLEY, Stephen Paul.

In: IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 41, No. 12, 01.12.2019, pp. 2947-2960.

Research output: Journal Publications › Journal Article (refereed)

TY - JOUR

T1 - On the effectiveness of least squares generative adversarial networks

AU - MAO, Xudong

AU - LI, Qing

AU - XIE, Haoran

AU - LAU, Raymond Yiu Keung

AU - WANG, Zhen

AU - SMOLLEY, Stephen Paul

PY - 2019/12/1

Y1 - 2019/12/1

N2 - Unsupervised learning with generative adversarial networks (GANs) has proven to be hugely successful. Regular GANs hypothesize the discriminator as a classifier with the sigmoid cross entropy loss function. However, we found that this loss function may lead to the vanishing gradients problem during the learning process. To overcome such a problem, we propose in this paper the Least Squares Generative Adversarial Networks (LSGANs) which adopt the least squares loss for both the discriminator and the generator. We show that minimizing the objective function of LSGAN yields minimizing the Pearson χ² divergence. We also show that the derived objective function that yields minimizing the Pearson χ² divergence performs better than the classical one of using least squares for classification. There are two benefits of LSGANs over regular GANs. First, LSGANs are able to generate higher quality images than regular GANs. Second, LSGANs perform more stably during the learning process. For evaluating the image quality, we conduct both qualitative and quantitative experiments, and the experimental results show that LSGANs can generate higher quality images than regular GANs. Furthermore, we evaluate the stability of LSGANs in two groups. One is to compare between LSGANs and regular GANs without gradient penalty. We conduct three experiments, including Gaussian mixture distribution, difficult architectures, and a newly proposed method - datasets with small variability, to illustrate the stability of LSGANs. The other one is to compare between LSGANs with gradient penalty (LSGANs-GP) and WGANs with gradient penalty (WGANs-GP). The experimental results show that LSGANs-GP succeed in training for all the difficult architectures used in WGANs-GP, including 101-layer ResNet.

AB - Unsupervised learning with generative adversarial networks (GANs) has proven to be hugely successful. Regular GANs hypothesize the discriminator as a classifier with the sigmoid cross entropy loss function. However, we found that this loss function may lead to the vanishing gradients problem during the learning process. To overcome such a problem, we propose in this paper the Least Squares Generative Adversarial Networks (LSGANs) which adopt the least squares loss for both the discriminator and the generator. We show that minimizing the objective function of LSGAN yields minimizing the Pearson χ² divergence. We also show that the derived objective function that yields minimizing the Pearson χ² divergence performs better than the classical one of using least squares for classification. There are two benefits of LSGANs over regular GANs. First, LSGANs are able to generate higher quality images than regular GANs. Second, LSGANs perform more stably during the learning process. For evaluating the image quality, we conduct both qualitative and quantitative experiments, and the experimental results show that LSGANs can generate higher quality images than regular GANs. Furthermore, we evaluate the stability of LSGANs in two groups. One is to compare between LSGANs and regular GANs without gradient penalty. We conduct three experiments, including Gaussian mixture distribution, difficult architectures, and a newly proposed method - datasets with small variability, to illustrate the stability of LSGANs. The other one is to compare between LSGANs with gradient penalty (LSGANs-GP) and WGANs with gradient penalty (WGANs-GP). The experimental results show that LSGANs-GP succeed in training for all the difficult architectures used in WGANs-GP, including 101-layer ResNet.

KW - Generative adversarial networks

KW - generative model

KW - Generators

KW - image generation

KW - Least squares GANs

KW - Linear programming

KW - Stability analysis

KW - Task analysis

KW - Training

KW - χ² divergence

UR - http://www.scopus.com/inward/record.url?scp=85054646594&partnerID=8YFLogxK

U2 - 10.1109/TPAMI.2018.2872043

DO - 10.1109/TPAMI.2018.2872043

M3 - Journal Article (refereed)

C2 - 30273144

AN - SCOPUS:85054646594

VL - 41

SP - 2947

EP - 2960

JO - IEEE Transactions on Pattern Analysis and Machine Intelligence

JF - IEEE Transactions on Pattern Analysis and Machine Intelligence

SN - 0162-8828

IS - 12

ER -