TensorFlow implementation of f-GAN (NIPS 2016): f-GAN: Training Generative Neural Samplers Using Variational Divergence Minimization.
- Make these divergences work (suggestions welcome); a sketch of the objectives follows this list:
- Kullback-Leibler with tricky G loss
- Reverse-KL with tricky G loss
- Pearson-X2 with tricky G loss
- Squared-Hellinger with tricky G loss
- Jensen-Shannon with tricky G loss
- GAN with tricky G loss
- Test more divergences
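For reference, below is a minimal sketch of the f-GAN objectives used above; it is not the repository's code, and the names `F_DIVERGENCES` and `fgan_losses` are illustrative rather than the actual API of `train.py`. The output activations g_f and Fenchel conjugates f* follow the tables in the paper; the discriminator maximizes the variational lower bound, and the "tricky" generator loss (Section 3.2) maximizes E_Q[g_f(V(x))] instead of minimizing the bound directly.

```python
import tensorflow as tf

# Output activation g_f and Fenchel conjugate f* per divergence
# (from the paper's tables of conjugates and recommended activations).
F_DIVERGENCES = {
    'Kullback-Leibler':  (lambda v: v,
                          lambda t: tf.exp(t - 1.0)),
    'Reverse-KL':        (lambda v: -tf.exp(-v),
                          lambda t: -1.0 - tf.log(-t)),
    'Pearson-X2':        (lambda v: v,
                          lambda t: 0.25 * tf.square(t) + t),
    'Squared-Hellinger': (lambda v: 1.0 - tf.exp(-v),
                          lambda t: t / (1.0 - t)),
    'Jensen-Shannon':    (lambda v: tf.log(2.0) - tf.nn.softplus(-v),
                          lambda t: -tf.log(2.0 - tf.exp(t))),
    'GAN':               (lambda v: -tf.nn.softplus(-v),
                          lambda t: -tf.log(1.0 - tf.exp(t))),
}

def fgan_losses(v_real, v_fake, divergence='Pearson-X2', tricky_G=True):
    """v_real / v_fake: raw (unactivated) variational-function outputs on real / generated data."""
    g_f, f_star = F_DIVERGENCES[divergence]
    t_real, t_fake = g_f(v_real), g_f(v_fake)
    # Discriminator maximizes F = E_P[g_f(V(x))] - E_Q[f*(g_f(V(x)))], so it minimizes -F.
    d_loss = -tf.reduce_mean(t_real) + tf.reduce_mean(f_star(t_fake))
    if tricky_G:
        # "Tricky" (non-saturating) generator loss: maximize E_Q[g_f(V(x))].
        g_loss = -tf.reduce_mean(t_fake)
    else:
        # Theoretically correct generator loss: minimize F, i.e. minimize -E_Q[f*(g_f(V(x)))].
        g_loss = -tf.reduce_mean(f_star(t_fake))
    return d_loss, g_loss
```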
-
Using tricky G loss (see Section 3.2 in the paper)
[Generated samples per divergence: Kullback-Leibler | Reverse-KL | Pearson-X2 | Squared-Hellinger | Jensen-Shannon | GAN (NaN)]
-
Using theoretically correct G loss
[Generated samples per divergence: Kullback-Leibler | Reverse-KL | Pearson-X2 | Squared-Hellinger | Jensen-Shannon | GAN (NaN)]
-
Prerequisites
- TensorFlow 1.7 or 1.8
- Python 2.7
-
Examples of training
-
Training
CUDA_VISIBLE_DEVICES=0 python train.py --dataset=mnist --divergence=Pearson-X2 --tricky_G
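To train with the theoretically correct G loss instead, the flag would presumably be omitted (assuming --tricky_G is a boolean switch; check train.py for the exact flag semantics):
CUDA_VISIBLE_DEVICES=0 python train.py --dataset=mnist --divergence=Pearson-X2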
-
TensorBoard for loss visualization
CUDA_VISIBLE_DEVICES='' tensorboard --logdir ./output/mnist_Pearson-X2_trickyG/summaries --port 6006
-
If you find f-GAN useful in your research work, please consider citing:
@inproceedings{nowozin2016f,
title={f-GAN: Training Generative Neural Samplers Using Variational Divergence Minimization},
author={Nowozin, Sebastian and Cseke, Botond and Tomioka, Ryota},
booktitle={Advances in Neural Information Processing Systems (NIPS)},
year={2016}
}