Generative Adversarial Networks (GANs) are a machine learning approach to generative modelling that learns an underlying data distribution by producing synthetic samples. A GAN consists of a generator and a discriminator: the generator tries to create fake samples that appear to come from the target domain, while the discriminator tries to distinguish real samples from generated ones. In doing so, the two networks compete against each other in a two-player minimax game.
The adversarial training of the generator and discriminator can be formulated as a min-max optimization problem. However, proving the stability of GAN training is not trivial. In this project, we will first study the architecture of GANs and analyse the stability of the traditional GAN. We will then extend this analysis to other GAN variants and incorporate the resulting insights into the architecture to improve its stability.
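The min-max problem mentioned above is standardly written (following the original GAN formulation) as:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

The discriminator $D$ ascends on $V$, while the generator $G$ descends on it; at the equilibrium of this game the generator's distribution matches $p_{\mathrm{data}}$ and $D(x) = \tfrac{1}{2}$ everywhere.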
Understanding the architecture of Generative Adversarial Networks and formulating the adversarial training as a two-player game.
Implementing a GAN for a simple regression or generative task to develop a clear understanding of its training dynamics.
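As a minimal sketch of such a simple generative task (an illustrative assumption, not the project's actual implementation), the toy GAN below fits a one-dimensional linear generator to data drawn from N(3, 1), with a logistic discriminator and hand-derived gradients, so the alternating min-max updates are fully visible:

```python
# Toy 1-D GAN in plain NumPy: generator G(z) = a*z + b, latent z ~ N(0, 1),
# discriminator D(x) = sigmoid(w*x + c). Trained by alternating gradient steps.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0   # generator parameters
w, c = 0.0, 0.0   # discriminator parameters
lr = 0.05

for step in range(2000):
    real = rng.normal(3.0, 1.0, size=64)   # samples from the data distribution
    z = rng.normal(0.0, 1.0, size=64)      # latent noise
    fake = a * z + b                       # generator samples

    # Discriminator: gradient ascent on E[log D(real)] + E[log(1 - D(fake))]
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    grad_w = np.mean((1 - d_real) * real) - np.mean(d_fake * fake)
    grad_c = np.mean(1 - d_real) - np.mean(d_fake)
    w += lr * grad_w
    c += lr * grad_c

    # Generator: gradient ascent on the non-saturating objective E[log D(fake)]
    d_fake = sigmoid(w * fake + c)
    # d/dx log D(x) = (1 - D(x)) * w, then chain rule through x = a*z + b
    grad_a = np.mean((1 - d_fake) * w * z)
    grad_b = np.mean((1 - d_fake) * w)
    a += lr * grad_a
    b += lr * grad_b

print("generated mean ~", np.mean(a * rng.normal(size=10_000) + b))
```

Even this toy setting exhibits the instabilities the project is concerned with: the parameters tend to oscillate around the equilibrium rather than converge cleanly, which motivates the stability analysis in the following objectives.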
Analyzing the stability of traditional GANs and Wasserstein GANs.
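For reference, the Wasserstein GAN replaces the objective above with the Kantorovich-Rubinstein dual of the Wasserstein-1 distance, where the critic $f$ is constrained to be 1-Lipschitz:

```latex
\min_G \max_{\|f\|_L \le 1}
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[f(x)\big]
  - \mathbb{E}_{z \sim p_z}\big[f\big(G(z)\big)\big]
```

The Lipschitz constraint is enforced in practice by weight clipping (original WGAN) or a gradient penalty (WGAN-GP), and the resulting loss is known to give smoother training dynamics than the original GAN objective, which makes it a natural second case for the stability analysis.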
Introducing a skip-GAN, which adds a model that processes the data before the GAN and is connected to it via a skip connection. Analyzing the stability of the skip-GAN and running experiments to compare its stability with that of the traditional GAN.
Studying the model dynamics with different prior models and variations in the architecture.
Improving the stability of the architecture by incorporating these findings.
Basic understanding of machine learning.
Familiarity with differential equations.