Deep Convolutional GAN Poor Result Quality #11
Comments
Dear @gmcmacran, Thank you for taking the time to read our book and for your feedback. I'm glad you found it helpful in your learning journey. Regarding the questions you raised:
Regarding the discrepancies in the results: I rechecked the code and found a small error. The present code trains the discriminator on a full batch of images instead of the half batch mentioned in the comments. I have corrected it, and you can access the corrected notebook, named DCGAN_MNIST.ipynb. I hope this helps answer your questions. If you have any more queries, feel free to reach out.
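For reference, a minimal sketch of a half-batch discriminator update, assuming `discriminator`, `generator`, `x_train`, `batch_size`, and `z_dim` are already defined as in the notebook; the point is that the discriminator sees half a batch of real images plus half a batch of generated ones, rather than a full batch of each.

```python
import numpy as np

half_batch = batch_size // 2

# Half a batch of real images, labelled as real (1).
idx = np.random.randint(0, x_train.shape[0], half_batch)
real_imgs = x_train[idx]
d_loss_real = discriminator.train_on_batch(real_imgs, np.ones((half_batch, 1)))

# Half a batch of generated images, labelled as fake (0).
noise = np.random.normal(0, 1, (half_batch, z_dim))
fake_imgs = generator.predict(noise)
d_loss_fake = discriminator.train_on_batch(fake_imgs, np.zeros((half_batch, 1)))

# Average the two losses for reporting.
d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
```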
Thanks for responding. Follow-up question to question 3 above. In the deep convolutional GAN, are the discriminator's weights updated after calling train_on_batch? If yes, what does setting self.discriminator.trainable = False inside the initialization function do? If no, what is the purpose of self.discriminator.train_on_batch() in the train function?
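For context, a minimal sketch of the setup the question refers to, assuming the usual Keras behavior that a model's trainable setting is taken into account when compile() is called and kept until the model is recompiled. The tiny models below are placeholders, not the book's architecture.

```python
from tensorflow.keras import layers, models, optimizers

# Placeholder models; any generator/discriminator architecture works here.
discriminator = models.Sequential([
    layers.Flatten(input_shape=(28, 28, 1)),
    layers.Dense(1, activation="sigmoid"),
])
generator = models.Sequential([
    layers.Dense(28 * 28, activation="tanh", input_dim=100),
    layers.Reshape((28, 28, 1)),
])

# Compiled while trainable is True, so discriminator.train_on_batch()
# updates the discriminator's weights.
discriminator.compile(loss="binary_crossentropy",
                      optimizer=optimizers.Adam(learning_rate=0.0002))

# The trainable flag is captured at compile time, so freezing here only
# affects models compiled *after* this point.
discriminator.trainable = False

# The combined model is compiled with a frozen discriminator, so
# gan.train_on_batch() updates only the generator's weights.
gan = models.Sequential([generator, discriminator])
gan.compile(loss="binary_crossentropy",
            optimizer=optimizers.Adam(learning_rate=0.0002))
```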
First, I read your book and really enjoyed it. I liked how clearly you explained concepts and provided code within the text. I learned a lot.
GitHub code: https://github.com/PacktPublishing/Deep-Learning-with-TensorFlow-2-and-Keras/blob/master/Chapter%206/DCGAN.ipynb
Book: Deep Learning with TensorFlow 2 and Keras Second Edition
I am working on recreating the deep convolutional GAN starting on page 198 and am finding that the quality of the results changes drastically from run to run. I would like to rule out a few discrepancies between the book and the current GitHub code.
On page 199, the book says the learning rate is 0.002, with two zeros after the decimal point. On GitHub and in the book's code, the learning rate is 0.0002, with three zeros after the decimal point. Which one is correct?
On page 200, the text says the noise is 100-dimensional. In the book, the code is aligned with this: Z has a default value of 100 and is unchanged when the instance is created. On GitHub, the default is 10, not 100. Is one value preferred over the other, or is this an insignificant choice?
For the vanilla GAN, the discriminator's weights are set to trainable before calling discriminator.train_on_batch, and then reverted to not trainable when gan.train_on_batch is called. For the deep convolutional GAN, the discriminator's weights are never set to trainable. Was this on purpose?
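For concreteness, a minimal sketch of the toggling pattern described here for the vanilla GAN; `discriminator`, `gan`, `real_imgs`, `fake_imgs`, `noise`, and the label arrays are assumed to be defined as in the book's training loop.

```python
# Vanilla-GAN pattern: unfreeze the discriminator for its own update...
discriminator.trainable = True
discriminator.train_on_batch(real_imgs, real_labels)
discriminator.train_on_batch(fake_imgs, fake_labels)

# ...then freeze it again before the combined update, with the intent that
# gan.train_on_batch() moves only the generator's weights.
discriminator.trainable = False
gan.train_on_batch(noise, real_labels)
```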
For the vanilla GAN, the discriminator receives one set of fake data, and then a different random sample is created to train the GAN. For the deep convolutional GAN, the discriminator and the GAN are trained on the same random sample of noise. Is there a reason for the difference? Is this an insignificant choice?
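To make the difference concrete, a minimal sketch of the two variants, assuming `generator`, `discriminator`, `gan`, `batch_size`, and `z_dim` are defined; in a real training loop only one of the two generator updates would be used.

```python
import numpy as np

# Shared setup: one batch of noise and the fake images it produces.
noise = np.random.normal(0, 1, (batch_size, z_dim))
fake_imgs = generator.predict(noise)
discriminator.train_on_batch(fake_imgs, np.zeros((batch_size, 1)))

# Vanilla-GAN variant: draw a fresh noise sample for the generator update.
fresh_noise = np.random.normal(0, 1, (batch_size, z_dim))
gan.train_on_batch(fresh_noise, np.ones((batch_size, 1)))

# DCGAN-notebook variant: reuse the same noise for the generator update.
gan.train_on_batch(noise, np.ones((batch_size, 1)))
```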