Trying to implement Autoencoder in Pytorch #15
+1 @charlesq34
It is helpful to use only one category in the dataset (e.g. ShapeNet) to train this autoencoder model.
@skyir0n - the above result is from training only one category from the dataset.
In your code, batch normalization is performed by reusing the same BN module. Because a BN module has learnable parameters (gamma and beta, for scaling and shifting), a separate BN module should be defined for each layer.
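To illustrate the point above, here is a minimal sketch of a PointNet-style encoder that defines a separate `nn.BatchNorm1d` per layer (the layer widths and names here are hypothetical, not taken from the repo). Since each layer has a different channel count, each needs its own BN module with its own parameters and running statistics anyway:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv1d(3, 64, 1)
        self.conv2 = nn.Conv1d(64, 128, 1)
        self.conv3 = nn.Conv1d(128, 1024, 1)
        # one BatchNorm1d per layer: each keeps its own learnable
        # scale/shift (gamma, beta) and its own running statistics
        self.bn1 = nn.BatchNorm1d(64)
        self.bn2 = nn.BatchNorm1d(128)
        self.bn3 = nn.BatchNorm1d(1024)

    def forward(self, x):                    # x: (batch, 3, num_points)
        x = torch.relu(self.bn1(self.conv1(x)))
        x = torch.relu(self.bn2(self.conv2(x)))
        x = torch.relu(self.bn3(self.conv3(x)))
        return torch.max(x, dim=2)[0]        # global max pool -> (batch, 1024)
```

Sharing a single BN module across layers would also make the running mean/variance a mixture of all layers' activations, which is why such a model can look fine in train mode but break in eval mode.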
Hi @dhirajsuvarna, did you solve the problem? What was going wrong?
@dvirginz, @siddharthKatageri,
@dhirajsuvarna @dvirginz
I have used chamfer distance as a loss function, provided by pytorch3d.
Hi @siddharthKatageri, I'm using a smaller version of ModelNet with 10 classes. Did you use only one of them? Any additional hints?
I have the same issue as well. Are there any tricks for training the PointNet AE? I think a PointNet AE is not the best choice for reconstructing point clouds, since the MLP decoder cannot enforce permutation invariance, but it should still produce a recognizable shape rather than a clustered cloud.
Same issue here. I tried using the EMD loss, and the reconstructed point cloud always looks like a strange cube.
Hi all, are you training the autoencoder on all the classes? I was able to obtain good reconstructions only with per-class training...
Thanks! I'm wondering how many points you sampled during training for each batch?
Thank you for reporting this. But an AE should scale up to multiple classes too, shouldn't it?
Also @saltoricristiano, check this out: I hope you are doing the same.
I tested with @dhirajsuvarna's code (https://github.com/dhirajsuvarna/pointnet-autoencoder-pytorch). In my case, the shape of the input tensors was wrong, specifically for the chamfer loss module (https://github.com/dhirajsuvarna/pointnet-autoencoder-pytorch/blob/3bb4a90a8bc016c1d3ab3ab7433f039fb3759196/train_shapenet.py#L64-L83): for chamfer loss, the tensors must have shape (batch_size, num_points, num_dim).

```python
points = data
points = points.transpose(2, 1)          # (B, 3, N) for the encoder
points = points.to(device)
optimizer.zero_grad()
reconstructed_points, latent_vector = autoencoder(points)
# chamfer loss expects (batch_size, num_points, num_dim)
points = points.transpose(1, 2)
reconstructed_points = reconstructed_points.transpose(1, 2)
dist1, dist2 = chamfer_dist(points, reconstructed_points)
train_loss = torch.mean(dist1) + torch.mean(dist2)
```

In addition, there were only 3 batch norm layers while 7 layers should have them. With just 3 BatchNorm modules the model works well in train mode but not in eval mode. After changing this, it works well.
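As a sanity check of the (batch_size, num_points, num_dim) convention described above, here is a minimal symmetric Chamfer distance in plain NumPy. This is an illustrative sketch, not the pytorch3d implementation, but it makes the shape convention and the two nearest-neighbor terms explicit:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between two batched point clouds.

    a: (batch_size, num_points_a, num_dim)
    b: (batch_size, num_points_b, num_dim)
    """
    # pairwise squared distances via broadcasting: (B, Na, Nb)
    diff = a[:, :, None, :] - b[:, None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)
    dist1 = d2.min(axis=2)  # for each point in a, squared dist to nearest in b
    dist2 = d2.min(axis=1)  # for each point in b, squared dist to nearest in a
    return dist1.mean() + dist2.mean()
```

For example, two identical clouds give a distance of 0, and a single point at the origin against a single point at (1, 0) gives 1 + 1 = 2. Note the O(Na * Nb) memory cost of the pairwise matrix, which is why production implementations use optimized kernels.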
Thanks for sharing the correct tensor shape that Chamfer distance requires. Your help is much appreciated.
Hi,
I am trying to implement an autoencoder in PyTorch, and I wrote a model which I believe is exactly what is present in this repo.
Model in pytorch
However, after training this model for 200 epochs, when I try to generate the output point cloud, all I can generate is scattered points as shown below -
Any direction to figure out the problem would be helpful.