
About noise optimize in image2styleGAN++ #1

Open
rainsoulsrx opened this issue Dec 31, 2019 · 6 comments

Comments

@rainsoulsrx

Hi, thank you for your good work. I have a question: in the Image2StyleGAN++ paper, the authors mention that they optimize both w and n (noise), but in your code I only find w and nothing about a noise optimization process.

@pacifinapacific
Owner

pacifinapacific commented Dec 31, 2019

Thanks for your question. As you say, optimizing n should produce better images. However, I was satisfied with the image quality obtained by optimizing w only.
Also, to pass n to the optimizer, the StyleGAN implementation needed to be slightly modified, so I skipped that step. Sorry.

@yosefyehoshua

yosefyehoshua commented Jun 14, 2020

Hi, you said that to optimize n (noise) you need to slightly modify the StyleGAN implementation. I'm trying to add this optimization but can't see why and where I need to modify the StyleGAN code. I would be happy for some advice :)

@pacifinapacific
Owner

Noise is generated dynamically inside the StyleGAN layers. To pass it to the optimizer, you need to keep it as a parameter of the class, but I don't have a good idea how. The relevant line is:

# a fresh noise tensor is drawn on every forward pass, so there is no persistent tensor to optimize
noise = torch.randn(x.size(0), 1, x.size(2), x.size(3), device=x.device, dtype=x.dtype)
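For example (just a rough sketch, not the actual layer in this repo; the class name NoiseInjection and the resolution argument are my own placeholders), the modification could look like this:

import torch
import torch.nn as nn

class NoiseInjection(nn.Module):
    # Hypothetical noise-injection layer that stores its noise map as a
    # learnable parameter instead of drawing a fresh tensor every forward pass.
    def __init__(self, channels, resolution):
        super().__init__()
        # per-channel scaling weights, as in StyleGAN
        self.weight = nn.Parameter(torch.zeros(1, channels, 1, 1))
        # fixed noise map registered as a parameter so an optimizer can update it
        self.noise = nn.Parameter(torch.randn(1, 1, resolution, resolution))

    def forward(self, x):
        # no torch.randn here: reuse the stored (optimizable) noise
        return x + self.weight * self.noise

With something like this, the noise tensors of a generator G could then be collected with, e.g., [p for n, p in G.named_parameters() if "noise" in n] and handed to a separate optimizer.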

@yosefyehoshua

yosefyehoshua commented Jun 17, 2020

Thanks for the answer!

Maybe I got this wrong, but in StyleGAN the generated noise is constant, so passing it to the optimizer the way dlatent is passed:

dlatent = torch.zeros((1, 18, 512), requires_grad=True, device=device)
optimizer = optim.Adam([dlatent], lr=0.01, betas=(0.9, 0.999), eps=1e-8)

i.e. something like:

optimizer = optim.Adam([self.noise], lr=0.01, betas=(0.9, 0.999), eps=1e-8)

feels weird.

I would be happy if you could shed some light :)

@GreenLimeSia

@yosefyehoshua maybe you should do this:

noise_params = G.static_noise(trainable=True)
dlatent=torch.zeros((1,18,512),requires_grad=True,device=device) 
optimizer_dlatent = optim.Adam([dlatent], lr=0.01, betas=(0.9, 0.999), eps=1e-8)
optimizer_noise = optim.Adam(noise_params, lr=0.01, betas=(0.9, 0.999), eps=1e-8)

The Image2StyleGAN++ paper recommends alternating optimization, where each set of variables is only optimized once: first optimize w, then n. So we should adopt this recommendation. The point of doing this is that we can optimize the latent with the noise trainable; noise_params is a list of noise tensors.
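Put together, the alternating schedule could look roughly like this (only a sketch under my assumptions: G, perceptual_loss, target and the step counts are placeholders you have to provide, and static_noise(trainable=True) is the API from the implementation I had in mind):

import torch
from torch import optim

device = "cuda" if torch.cuda.is_available() else "cpu"
w_steps, n_steps = 1000, 1000  # placeholder iteration counts

# G, perceptual_loss and target are assumed to be defined elsewhere:
# G maps a (1, 18, 512) w+ latent to an image and uses noise_params internally.
noise_params = G.static_noise(trainable=True)  # list of trainable noise tensors
dlatent = torch.zeros((1, 18, 512), requires_grad=True, device=device)

optimizer_dlatent = optim.Adam([dlatent], lr=0.01, betas=(0.9, 0.999), eps=1e-8)
optimizer_noise = optim.Adam(noise_params, lr=0.01, betas=(0.9, 0.999), eps=1e-8)

# Phase 1: optimize w only; the noise maps stay at their current values.
for _ in range(w_steps):
    optimizer_dlatent.zero_grad()
    loss = perceptual_loss(G(dlatent), target)
    loss.backward()
    optimizer_dlatent.step()

# Phase 2: freeze w and optimize only the noise maps.
for _ in range(n_steps):
    optimizer_noise.zero_grad()
    loss = perceptual_loss(G(dlatent), target)
    loss.backward()
    optimizer_noise.step()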

@wold21

wold21 commented Nov 23, 2020


@GreenLimeSia Can your suggestion be applied directly to Image2StyleGAN too? I don't know how to add it.
