
Concerns about loss_u #13

Open
HaHaHaHaWang opened this issue Mar 13, 2024 · 3 comments

Comments

@HaHaHaHaWang

Thanks for your work.
I am a beginner, so I'm quite puzzled by the following issue.
I trained models with three of the algorithms you provide — fixmatch, comatch, and fixmatch_ccssl — following your training scripts. I noticed that at the beginning of training, loss_u is always 0 and only starts increasing later. Is this normal, or could it be caused by errors in my modifications?

@KaiWU5
Collaborator

KaiWU5 commented Apr 18, 2024

This is normal. At the initial training stage the model is noisy and cannot yet generate confident pseudo labels, so almost no unlabeled samples contribute to loss_u. In semi-supervised learning we therefore usually train on supervised data for several steps first, and the unsupervised loss only kicks in later.
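As an illustration (not the repository's exact code), here is a minimal NumPy sketch of the FixMatch-style confidence mask that explains why loss_u starts at 0: the function name, shapes, and threshold value are assumptions for the example.

```python
import numpy as np

def fixmatch_unsup_loss(probs_weak, loss_per_sample, threshold=0.95):
    """Illustrative FixMatch-style masked unsupervised loss.

    probs_weak: softmax probabilities from the weakly augmented view,
        shape (batch, num_classes).
    loss_per_sample: per-sample cross-entropy of the strongly augmented
        view against the pseudo labels, shape (batch,).
    Only samples whose max confidence exceeds `threshold` contribute.
    """
    confidence = probs_weak.max(axis=1)
    mask = (confidence >= threshold).astype(float)
    # Early in training predictions are near-uniform, so the max
    # confidence is low, the mask is all zeros, and loss_u == 0.
    return float((loss_per_sample * mask).mean())

# Early training: near-uniform predictions over 10 classes
probs = np.full((4, 10), 0.1)
losses = np.full(4, 2.3)
print(fixmatch_unsup_loss(probs, losses))  # → 0.0
```

Once the model starts producing confident predictions, some mask entries become 1 and loss_u rises above 0, which matches the behavior you observed.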

@HaHaHaHaWang
Author

Thank you for your reply. I recently tried to reproduce the FixMatchCCSSL experiment on STL10, but I cannot match the results in your paper. Would you be willing to publish the config of FixMatchCCSSL on STL10?

@KaiWU5
Collaborator

KaiWU5 commented Jul 11, 2024

Thanks for your attention to our work. It's been a while since we published the paper; I will try to find or reproduce the model for STL10. It could take a while, since most of my effort has now shifted to large-model training.
