Cannot reproduce the results reported in the Paper (CD=2.723) #8
Comments
So, what's the problem you are facing now?
Hi author, thanks for the amazing work. With your released pre-trained model, I can get a 0.7082 F-score and 2.722 CD.
"You need to train the whole network with Chamfer Distance." --- This reaches 4.588 CD and a 0.6133 F-score, which is similar to the value in Table 7 of your paper (Gridding Loss = Not Used, CD, Complete = 4.460).
"Then .. fine-tune the network with Gridding Loss + Chamfer Distance on the Coarse Point Cloud." --- This reaches 4.536 CD and a 0.6255 F-score. It was supposed to be about ~2.7, right?
"Finally, you fine-tune the network with Chamfer Distance." --- The CD didn't decrease below 4.536. I'm wondering at which step I'm making a mistake (e.g., the learning rate or the loss weight of the Gridding Loss)?
Your processed ShapeNet dataset has 28,974 training samples. Is it because the provided dataset is incomplete?
@AlphaPav |
The PCN dataset is about 48 GB, while the released dataset is about 10 GB. Do you mean that you randomly augment each point cloud 8 times during training?
No. I think the difference may be caused by different compression ratios.
Hi! I also cannot reproduce the results. The best CD I got after training three times was 5.2. May I know how many epochs you trained for each round (i.e., CD only, CD + Gridding Loss, CD only)?
@SarahChane98 |
Hi there, I just tested your pre-trained model on the test set, and the result is close to the value reported in the paper. However, when I tested on the validation set, it reported a dense CD of around 7.177. I was wondering why there is such a huge gap between the CDs on the val set and the test set? The pre-trained model also reports a dense CD of around 5.087 on the training set (which should match the training dense loss, if I understand correctly).
@paulwong16 |
But why could the CD on the test set be even much lower than on the training set?
@paulwong16 |
Well... I believe the best model should not be chosen according to the test result (it should be chosen by the validation result). From the best results I could reproduce, the training loss was a little lower than the val and test losses, and the test loss was close to the val loss. Anyway, thanks for your kind reply; I will try to reproduce the result.
@paulwong16 |
@hzxie Hi, I'm wondering how you incorporate the Gridding Loss in training? I have not found it in the script. Thanks!
You can use the Gridding Loss here (Line 113 in 3352592) when fine-tuning the network.
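For readers wondering how the two losses fit together during fine-tuning, here is a minimal, illustrative pure-Python sketch. Note the hedges: the real Gridding Loss in GRNet is the L1 distance between differentiable gridding values computed by a CUDA op (each point spreads weight over the eight vertices of its grid cell), whereas `grid_density` below is a crude, non-differentiable stand-in using per-cell point counts; the weight `alpha` is hypothetical and not taken from the repo.

```python
def grid_density(points, res=4, lo=-1.0, hi=1.0):
    """Crude stand-in for GRNet's differentiable Gridding: count the points
    falling into each cell of a res x res x res grid over [lo, hi]^3."""
    cell = (hi - lo) / res
    dens = {}
    for p in points:
        key = tuple(min(res - 1, max(0, int((v - lo) / cell))) for v in p)
        dens[key] = dens.get(key, 0) + 1
    return dens

def gridding_l1(pred, gt, res=4):
    """L1 distance between the two density grids, averaged over all cells."""
    dp, dg = grid_density(pred, res), grid_density(gt, res)
    cells = set(dp) | set(dg)
    return sum(abs(dp.get(c, 0) - dg.get(c, 0)) for c in cells) / res ** 3

def fine_tune_loss(pred, gt, chamfer, alpha=0.1):
    """Fine-tuning objective discussed in the thread: Chamfer Distance plus
    a (hypothetically weighted) gridding term on the coarse point cloud.
    `chamfer` is supplied by the caller, e.g. the repo's CD implementation."""
    return chamfer(pred, gt) + alpha * gridding_l1(pred, gt)
```

The key design point is that the gridding term compares global density patterns on a grid rather than point-to-point distances, which is why the paper applies it on the coarse point cloud during the middle fine-tuning stage.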
@hzxie Thank you, I tried it out and the result seems consistent with the expected trends. Thanks for your inspiring work ;)
Hi, I'm wondering how to fine-tune the network from the previous weights? I've tried the same configuration as in your paper, but the best model gets CD = 4.538 and F-Score = 0.6206, while your pre-trained model gets CD = 2.723 and F-Score = 0.7082. I also checked the log and found that the network had converged to the optimum within 20 epochs. Why do you set 150 epochs as the default?
In my experiments, the loss will continue to decrease after 20 epochs. |
Hi, I still cannot reproduce the result. Can you provide more details? I've tried to fine-tune the framework with the Gridding Loss and a lower learning rate, but the CD and F-score got worse.
@Lillian9707 |
Thank you for your reply! |
Hi, sorry to bother you. I still cannot reproduce the results in the paper.
Try to fine-tune the network w/ and w/o Gridding Loss several times. |
You need to train the whole network with Chamfer Distance. It reaches CD ~0.40 on ShapeNet.
Then, you need to fine-tune the network with Gridding Loss + Chamfer Distance on the Coarse Point Cloud.
Finally, you fine-tune the network with Chamfer Distance. Chamfer Distance is taken as the evaluation metric; therefore, you cannot get a lower CD without using Chamfer Distance as a loss.
Originally posted by @hzxie in #3 (comment)
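Since the whole discussion hinges on how CD is computed, below is a minimal pure-Python sketch of a symmetric Chamfer Distance between two point sets. Caveat: conventions differ across papers and codebases (squared vs. plain L2 nearest-neighbour distances, summing vs. averaging the two directions, scale factors like ×10³), so absolute CD values are only comparable under one fixed convention; this sketch is an assumption, not the repo's exact implementation.

```python
def _sq_dist(a, b):
    """Squared Euclidean distance between two 3D points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def chamfer_distance(p, q):
    """Symmetric Chamfer Distance between two point sets (lists of 3-tuples):
    mean nearest-neighbour squared distance in each direction, summed.
    O(|p| * |q|) brute force; real implementations use CUDA kernels or
    KD-trees for the nearest-neighbour search."""
    d_pq = sum(min(_sq_dist(a, b) for b in q) for a in p) / len(p)
    d_qp = sum(min(_sq_dist(b, a) for a in p) for b in q) / len(q)
    return d_pq + d_qp
```

For example, `chamfer_distance([(0.0, 0.0, 0.0)], [(1.0, 0.0, 0.0)])` gives 1.0 in each direction, 2.0 in total; a mismatch in the convention used is one plausible source of the scale differences between CD values quoted in this thread.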