
About the results shown in table 1 #11

Open
revaeb opened this issue Dec 19, 2021 · 6 comments

Comments

revaeb commented Dec 19, 2021

Hi, sorry to bother you, and thanks for sharing your work.
Could you please tell me how to set --train_split="${SPLIT}" when I want to reproduce your results shown in Table 1 (the semi-supervised setting with 1.4k images as the labeled data)?
Should it be --train_split="8_clean"? Or is that split only for the low-data setting?
Thanks for your help!

@Yuliang-Zou

Hi @revaeb, just simply setting --train_split=train should be fine.
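For anyone reproducing this, here is a minimal sketch of what that invocation could look like, assuming the DeepLab-style flags used in this repo's README; the script name (train_wss.py), crop size, and checkpoint/dataset paths are illustrative placeholders, not the authors' exact command:

```bash
# Illustrative sketch only: flag names follow the DeepLab/PseudoSeg convention,
# and all paths/sizes are placeholders to replace with your own setup.
# --train_split="train" selects the 1.4k clean VOC training images as the
# labeled set for the semi-supervised setting reported in Table 1.
python train_wss.py \
  --logtostderr \
  --dataset="pascal_voc_seg" \
  --train_split="train" \
  --model_variant="xception_65" \
  --train_crop_size="513,513" \
  --tf_initial_checkpoint="${INIT_CKPT}" \
  --train_logdir="${TRAIN_LOGDIR}" \
  --dataset_dir="${PASCAL_DATASET}"
```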

revaeb commented Dec 20, 2021

> Hi @revaeb, just simply setting --train_split=train should be fine.

Thank you so much~!
And I wonder: have you ever run experiments on the other semi-supervised data splits (1/2, 1/4, 1/16) of the VOC2012 dataset? I mean, using the augmented set as the full training set, taking different amounts of data (1/2, 1/4, 1/16) as the labeled data and the rest as unlabeled data, and then testing on the val set?

@Yuliang-Zou

Nope. I found that the annotation quality in the augmented set (9k) is quite bad. Adding part of the images from the augmented set to the 1.4k training set does not necessarily improve the result. So I think making splits using 1.4k+9k is not that useful and the results from there could be misleading.

revaeb commented Dec 22, 2021

> Nope. I found that the annotation quality in the augmented set (9k) is quite bad. Adding part of the images from the augmented set to the 1.4k training set does not necessarily improve the result. So I think making splits using 1.4k+9k is not that useful and the results from there could be misleading.

Thank you for your reply!

revaeb commented Dec 24, 2021

> Nope. I found that the annotation quality in the augmented set (9k) is quite bad. Adding part of the images from the augmented set to the 1.4k training set does not necessarily improve the result. So I think making splits using 1.4k+9k is not that useful and the results from there could be misleading.

Sorry to bother you again.
I think the results shown in Table 2 are all based on a ResNet backbone, and the PseudoSeg method is included; am I making any mistake?
If I understand it correctly, could you please give me the command line to use ResNet as the backbone in the low-data regime experiments? I tried Xception_65 as the backbone (using the command line you provided) and the performance seemed really good, but when I changed the backbone to ResNet, the performance degraded. I think some of the parameters were not set correctly.
Thanks for your help again :)


Yuliang-Zou commented Jan 2, 2022

  1. Yes, the results in Table 2 are based on ResNet-101.
  2. I think you just need to replace the backbone and set the initialization checkpoint correctly, and then it should be fine. But if you are using a small number of GPUs, you may need to adjust the learning rate and freeze batch norm accordingly; see the sketch below.
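To make point 2 concrete, here is a hedged sketch of the flag changes for a ResNet run on a small number of GPUs, again assuming DeepLab-style flags; the variant name (resnet_v1_101_beta), the learning rate, and the paths are assumptions for illustration, not the authors' verified settings:

```bash
# Illustrative sketch only: swaps the backbone to ResNet-101 and points the
# initialization at a matching ResNet checkpoint. With few GPUs (small batch),
# batch norm is frozen and the base learning rate lowered, per the advice above.
# Flag names follow the DeepLab convention; all values are placeholders.
python train_wss.py \
  --logtostderr \
  --dataset="pascal_voc_seg" \
  --train_split="train" \
  --model_variant="resnet_v1_101_beta" \
  --tf_initial_checkpoint="${RESNET101_INIT_CKPT}" \
  --train_logdir="${TRAIN_LOGDIR}" \
  --dataset_dir="${PASCAL_DATASET}" \
  --fine_tune_batch_norm=false \
  --base_learning_rate=0.0007
```

The key point is that the initialization checkpoint has to match the new backbone; reusing an Xception checkpoint with a ResNet --model_variant would be one plausible cause of the degradation described above.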
