
Gradient Penalty #2

Open
HCA97 opened this issue Sep 15, 2019 · 1 comment

HCA97 commented Sep 15, 2019

Hi,

I have a question regarding the gradient penalty. I believe the L2-norm calculation of the gradient is incorrect: the sum should be taken across all axes except the batch axis, because the norm of each sample's gradient should be a scalar value. However, your L2-norm calculation returns a matrix, not a scalar, because you are summing across the channel axis only. Is there a special reason for that?

slopes = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=1))
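
For concreteness, a small shape check illustrates the issue. This is just a sketch, assuming an NCHW gradient tensor (so axis 1 is the channel axis); the shapes are made up for illustration:

import tensorflow as tf

# Hypothetical critic gradients in NCHW layout: [batch, channels, height, width].
grads = tf.ones([8, 3, 64, 64])

# Summing over axis=1 only collapses the channel axis, leaving one
# 64x64 matrix per sample instead of one scalar per sample.
wrong = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=1))          # shape (8, 64, 64)

# Summing over all non-batch axes yields the per-sample L2 norm.
right = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3]))  # shape (8,)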

Best Regards,
Cem


rmukh commented Feb 4, 2020

@HCA97, I guess you are right. It should be something like:

slopes = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3]))
penalty = self.exp_config.scale * tf.reduce_mean(tf.square(slopes - 1.0))
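
For reference, a minimal end-to-end sketch of the WGAN-GP penalty (Gulrajani et al., 2017) with the corrected reduction. It assumes TF1 graph mode, 4-D image batches, and a callable critic; the names discriminator, real, fake, and scale are illustrative placeholders, not taken from this repo:

import tensorflow as tf

def gradient_penalty(discriminator, real, fake, scale=10.0):
    # real/fake: 4-D image batches [batch, d1, d2, d3]; discriminator: callable critic.
    batch_size = tf.shape(real)[0]
    # Interpolate uniformly between real and generated samples.
    epsilon = tf.random.uniform([batch_size, 1, 1, 1], 0.0, 1.0)
    interpolates = real + epsilon * (fake - real)
    # Gradient of the critic output w.r.t. the interpolated inputs (TF1 graph mode).
    grads = tf.gradients(discriminator(interpolates), [interpolates])[0]
    # Per-sample L2 norm: reduce over every axis except the batch axis.
    slopes = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3]))
    # Penalize deviation of the per-sample gradient norm from 1.
    return scale * tf.reduce_mean(tf.square(slopes - 1.0))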
