GAN

Generative Adversarial Network

This repo contains the model and the notebook for this Keras example on WGAN.<br> Full credits to: A_K_Nain<br> Space link: Demo

Wasserstein GAN (WGAN) with Gradient Penalty (GP)

Original paper of WGAN: Paper<br> Wasserstein GANs with Gradient Penalty: Paper

The original Wasserstein GAN leverages the Wasserstein distance to produce a value function that has better theoretical properties than the value function used in the original GAN paper. WGAN requires that the discriminator (aka the critic) lie within the space of 1-Lipschitz functions. The authors proposed weight clipping to achieve this constraint. Though weight clipping works, it can be a problematic way to enforce the 1-Lipschitz constraint and can cause undesirable behavior, e.g. a very deep WGAN discriminator (critic) often fails to converge.
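
As an illustration of the weight-clipping approach (a minimal sketch, not the code from this repo; the function name and the clip value of 0.01 from the original WGAN paper are assumptions), the constraint is enforced by clamping every critic weight to a small range after each optimizer step:

```python
import tensorflow as tf

CLIP_VALUE = 0.01  # clipping range suggested in the original WGAN paper


def clip_critic_weights(critic: tf.keras.Model, clip_value: float = CLIP_VALUE) -> None:
    """Clamp every trainable weight of the critic to [-clip_value, clip_value]."""
    for var in critic.trainable_variables:
        var.assign(tf.clip_by_value(var, -clip_value, clip_value))
```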

The WGAN-GP method proposes an alternative to weight clipping to ensure smooth training. Instead of clipping the weights, the authors proposed a "gradient penalty" by adding a loss term that keeps the L2 norm of the discriminator gradients close to 1.
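
Below is a minimal sketch of such a gradient-penalty term, assuming image-shaped inputs and a penalty weight of 10 as in the WGAN-GP paper (the function and variable names are illustrative, not necessarily those used in the notebook): sample points on straight lines between real and generated images, take the critic's gradient with respect to those interpolates, and penalize the squared deviation of its L2 norm from 1.

```python
import tensorflow as tf


def gradient_penalty(critic: tf.keras.Model,
                     real_images: tf.Tensor,
                     fake_images: tf.Tensor,
                     gp_weight: float = 10.0) -> tf.Tensor:
    """Penalty that keeps the L2 norm of the critic's gradients close to 1."""
    batch_size = tf.shape(real_images)[0]
    # Random interpolation points between real and generated samples.
    alpha = tf.random.uniform([batch_size, 1, 1, 1], 0.0, 1.0)
    interpolated = real_images + alpha * (fake_images - real_images)

    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        critic_scores = critic(interpolated, training=True)

    # Gradient of the critic output w.r.t. the interpolated images.
    grads = tape.gradient(critic_scores, interpolated)
    norm = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3]))
    return gp_weight * tf.reduce_mean(tf.square(norm - 1.0))
```

This term is added to the critic's loss during training, in place of any weight clipping.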

<details> <summary>View Model Summary</summary>

Generator and Discriminator model summaries.

</details>