Hi,
I have a question regarding the gradient penalty. I believe the L2-norm calculation of the gradient is incorrect: it should sum across all axes except the batch axis, because the norm of a matrix should be a scalar value. However, your L2-norm calculation returns a matrix rather than a scalar, because you are summing across the channel axis only. Is there a special reason for that?
vagan-code/vagan/model_vagan.py
Line 134 in 6be0b51
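To illustrate the difference, here is a small NumPy sketch (the tensor shape and batch size are assumptions for illustration; the actual code operates on TensorFlow tensors):

```python
import numpy as np

# Hypothetical discriminator gradient for a batch of 2 images,
# shape (batch, height, width, channels) — values are arbitrary.
grads = np.random.RandomState(0).randn(2, 4, 4, 3)

# Summing over the channel axis only, as the line in question does,
# yields a per-pixel map, not one norm per sample:
per_pixel_norm = np.sqrt(np.sum(grads ** 2, axis=-1))
print(per_pixel_norm.shape)  # (2, 4, 4)

# Summing over all axes except the batch axis yields one scalar
# norm per sample, which is what the penalty term expects:
per_sample_norm = np.sqrt(np.sum(grads ** 2, axis=(1, 2, 3)))
print(per_sample_norm.shape)  # (2,)
```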
Best Regards,
Cem