Hi,
In the paper, for FFHQ 1k 256x256 training with DiffAugment, it is written that path length regularization and lazy regularization are disabled. If I am not wrong, in the DiffAugment-stylegan2-pytorch repo, lazy regularization and path length regularization are still enabled, right?
Just wanted to confirm this before I start any training. :)
Thanks!
Yes. You may need to change them, and possibly some other hyperparameters as described in the paper, to fully reproduce the results from our TensorFlow version.
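For reference, a minimal sketch of the two changes discussed above, assuming the repo follows the training-configuration layout of NVIDIA's stylegan2-ada-pytorch (the names `pl_weight`, `G_reg_interval`, and `D_reg_interval` come from that codebase and may differ here; check the repo's training script before applying):

```python
# Hypothetical config sketch, NOT the repo's verified settings.
# In stylegan2-ada-pytorch-style code, path length regularization is
# weighted by pl_weight, and lazy regularization is controlled by the
# regularization intervals for G and D.

loss_kwargs = dict(
    r1_gamma=10,   # R1 regularization is kept as usual
    pl_weight=0,   # setting the weight to 0 disables path length regularization
)

# Setting the intervals to None disables lazy regularization, so the
# regularization terms are applied on every training step instead of
# every N steps.
G_reg_interval = None
D_reg_interval = None
```

Again, treat this only as a pointer to which hyperparameters to look for; the exact names and defaults in DiffAugment-stylegan2-pytorch may differ.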