Confused about where the placeholder is defined #12

Open
meeio opened this issue Nov 30, 2017 · 1 comment
meeio commented Nov 30, 2017

When I change the position of the placeholder for x, training goes wrong.
The original is:

self.z = tf.placeholder(tf.float32, shape=(params.batch_size, 1), name="z")

with tf.variable_scope('G'):
 ……

self.x = tf.placeholder(tf.float32, shape=(params.batch_size, 1), name="x")

with tf.variable_scope('D'):
 ……

I changed it to:

self.z = tf.placeholder(tf.float32, shape=(params.batch_size, 1), name="z")
self.x = tf.placeholder(tf.float32, shape=(params.batch_size, 1), name="x")

with tf.variable_scope('G'):
 ……

with tf.variable_scope('D'):
 ……

With this change the training result is completely wrong. I am confused; can anyone tell me why moving the placeholder affects the result?

meeio commented Nov 30, 2017

The original code is:

class GAN(object):
    def __init__(self, params):
        # This defines the generator network - it takes samples from a noise
        # distribution as input, and passes them through an MLP.
        with tf.variable_scope('G'):
            self.z = tf.placeholder(tf.float32, shape=(params.batch_size, 1))
            self.G = generator(self.z, params.hidden_size)

        # The discriminator tries to tell the difference between samples from
        # the true data distribution (self.x) and the generated samples
        # (self.z).
        #
        # Here we create two copies of the discriminator network
        # that share parameters, as you cannot use the same network with
        # different inputs in TensorFlow.
        self.x = tf.placeholder(tf.float32, shape=(params.batch_size, 1))
        with tf.variable_scope('D'):
            self.D1 = discriminator(
                self.x,
                params.hidden_size,
                params.minibatch
            )
        with tf.variable_scope('D', reuse=True):
            self.D2 = discriminator(
                self.G,
                params.hidden_size,
                params.minibatch
            )

        # Define the loss for discriminator and generator networks
        # (see the original paper for details), and create optimizers for both
        self.loss_d = tf.reduce_mean(-log(self.D1) - log(1 - self.D2))
        self.loss_g = tf.reduce_mean(-log(self.D2))

        vars = tf.trainable_variables()
        self.d_params = [v for v in vars if v.name.startswith('D/')]
        self.g_params = [v for v in vars if v.name.startswith('G/')]

        self.opt_d = optimizer(self.loss_d, self.d_params)
        self.opt_g = optimizer(self.loss_g, self.g_params)

self.G does not depend on self.x, so I don't understand why defining self.x earlier affects the result.
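For what it's worth, the d_params / g_params split in the code above depends only on the scope prefix in each variable's name, not on where the placeholders are created: in TF1, placeholders are not trainable variables, so they never appear in tf.trainable_variables(). A plain-Python sketch of that filtering (no TensorFlow needed; the variable names here are made up for illustration):

```python
def split_params(trainable_names):
    """Mimic the startswith('D/') / startswith('G/') filtering above."""
    d_params = [n for n in trainable_names if n.startswith('D/')]
    g_params = [n for n in trainable_names if n.startswith('G/')]
    return d_params, g_params

# Only variables created inside the scopes get a 'G/' or 'D/' prefix;
# a placeholder defined before or after a scope block changes neither
# list, since it is not a trainable variable at all.
names = ['G/w0:0', 'G/b0:0', 'D/w0:0', 'D/b0:0']

d, g = split_params(names)
print(d)  # -> ['D/w0:0', 'D/b0:0']
print(g)  # -> ['G/w0:0', 'G/b0:0']
```

So whatever changes when the placeholder is moved, it should not be the assignment of variables to the two optimizers.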
