Zero-Centered GP: Which Training Methods for GANs do actually Converge?

Summary

proposes simple zero-centered gradient penalties that stabilize GAN training by shifting the eigenvalues of the Jacobian of the gradient vector field at the equilibrium into the region required for local convergence.

Strength

Weakness

Improvement Points

use alternating instead of simultaneous gradient descent
don’t use momentum (θ̃_{t+1} = β·θ̃_t + (1 − β)·∇L(θ_t),  θ_{t+1} = θ_t + η·θ̃_{t+1}; set β = 0)
use regularization to stabilize the training
simple zero-centered gradient penalties for the discriminator yield excellent results
progressively growing architectures might not be all that important when using a good regularizer
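
The effect of the zero-centered gradient penalty can be illustrated on the paper's own Dirac-GAN toy example (generator is a Dirac distribution at θ, discriminator is D(x) = ψ·x, true data is a Dirac at x = 0). Below is a minimal NumPy sketch; the function names, learning rate, step count, and initialization are choices made here for illustration, not values from the paper.

```python
import numpy as np

def f_prime(t):
    """Derivative of f(t) = -log(1 + exp(-t))."""
    return 1.0 / (1.0 + np.exp(t))

def train_dirac_gan(gamma, steps=5000, lr=0.1):
    """Alternating gradient descent on the Dirac-GAN toy problem.

    Generator: Dirac distribution at theta; discriminator: D(x) = psi * x.
    Since the true data is a Dirac at x = 0 and grad_x D(x) = psi, the
    zero-centered gradient penalty on real data is R1 = (gamma / 2) * psi**2.
    """
    theta, psi = 1.0, 1.0
    for _ in range(steps):
        # discriminator step: ascend on its objective minus the R1 penalty
        psi = psi + lr * (f_prime(theta * psi) * theta - gamma * psi)
        # generator step: descend on the generator loss
        theta = theta - lr * f_prime(theta * psi) * psi
    return theta, psi
```

With gamma = 0 (no regularization), the parameters orbit the equilibrium (0, 0) without ever converging; with gamma = 1.0, both theta and psi are driven to 0, which is the stabilizing effect the notes above refer to.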

Objectives