
End-to-end network prediction parameters #9

Open

ideas-man opened this issue Nov 23, 2020 · 2 comments

Comments

@ideas-man

Hey @Fang-Haoshu !

Can you please post the values of the hyperparameters you used when training the network presented in the paper (and the checkpoint's epoch, if you used early stopping to obtain the reported scores)? I am trying to reproduce your results for my own work, and right now I am nowhere near them with these values:

number of points: 20000
num of views: 300
max epoch: 90
batch size: 4
learning rate: 0.001
weight decay: 0
bn decay step every: 10
bn decay rate: 0.5
learning rate decay steps: 40, 60, 80
learning rate decay rates: 0.1, 0.1, 0.1

Overfitting occurs quite early (around the 20th-25th epoch), while the paper states that you trained for at least 100 epochs. Am I missing something?

The initial learning rate is 0.001 and the batch size is 4. The learning rate is decreased to 0.0001 after 60 epochs and then decreased to 0.00001 after 100 epochs.
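For reference, the step-decay schedule described above (initial rate 0.001, multiplied by 0.1 at each decay step) can be sketched in plain Python. The function name and defaults below are illustrative, not from the original codebase; the defaults match the 40/60/80 schedule from the list above, and the paper's 60/100 variant can be passed in explicitly.

```python
def lr_at_epoch(epoch, base_lr=0.001, decay_steps=(40, 60, 80), decay_rate=0.1):
    """Return the learning rate used at a given (0-indexed) epoch
    under a multi-step decay schedule: the rate is multiplied by
    decay_rate at every epoch listed in decay_steps."""
    lr = base_lr
    for step in decay_steps:
        if epoch >= step:
            lr *= decay_rate
    return lr

# 40/60/80 schedule from the hyperparameter list above:
print(lr_at_epoch(10))   # still at the base rate
print(lr_at_epoch(45))   # after the first decay step
print(lr_at_epoch(85))   # after all three decay steps

# The paper's 60/100 schedule can be expressed the same way:
print(lr_at_epoch(70, decay_steps=(60, 100)))
```

This is equivalent in spirit to PyTorch's `torch.optim.lr_scheduler.MultiStepLR` with `milestones=[40, 60, 80]` and `gamma=0.1`.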

Thanks in advance!

@Fang-Haoshu
Owner

Hi, the full network training and testing code is now available at: https://github.com/graspnet/graspnet-baseline
Please check it out there.

Best regards

@Fang-Haoshu
Owner

Yes, we also train for only 10 epochs.
