Transductive representation learning is time-consuming #25

Open
Shuai-Xie opened this issue Jun 10, 2020 · 0 comments
Shuai-Xie commented Jun 10, 2020

Hello.

Thanks for your great work on representation-based active learning. This method does save a lot of time compared with Ensemble, Core-set, or MC-Dropout when querying informative samples.

However, I have some questions.

  • I find transductive representation learning quite time-consuming, since it uses all the images in the trainset. Repeating it from scratch at the later stages (15%, 20%, ..., 40%) is largely duplicated work, because the majority of the labeled and unlabeled splits of the trainset hasn't changed between stages.
  • Also, if we train the VAE and Discriminator for 100 epochs over the whole trainset, the task model ends up being trained for far more than 100 epochs over the labeled data, since it shares the same iteration count but len(labelset) < len(trainset). I think splitting the task model's training from the VAE and Discriminator may be a better choice (see the sketch after this list).
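
To make the second point concrete, here is a minimal sketch of what I have in mind, not your code: train_vae_and_discriminator is just a hypothetical stand-in for your existing VAE/Discriminator training, and the models and loaders are assumed to be ordinary PyTorch objects. The task model gets its own epoch budget over the labeled set, and the VAE/Discriminator can be warm-started from the previous stage instead of repeating the transductive training from scratch.

import torch
import torch.nn.functional as F

def train_task_model(task_model, labeled_loader, task_epochs, device):
    # The task model only ever sees the labeled set and gets its own epoch
    # budget, instead of being driven by the VAE/Discriminator iteration count.
    optimizer = torch.optim.SGD(task_model.parameters(), lr=0.01, momentum=0.9)
    task_model.train()
    for _ in range(task_epochs):
        for images, labels in labeled_loader:
            images, labels = images.to(device), labels.to(device)
            loss = F.cross_entropy(task_model(images), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

def run_stage(task_model, vae, discriminator, labeled_loader, full_loader,
              train_vae_and_discriminator, task_epochs, vae_epochs, device,
              prev_sampler_state=None):
    # 1) Task model: trained on labeled data only.
    train_task_model(task_model, labeled_loader, task_epochs, device)
    # 2) VAE + Discriminator: warm-started from the previous stage so the
    #    transductive pass over the (mostly unchanged) trainset is fine-tuned
    #    rather than repeated from scratch at 15%, 20%, ... 40%.
    if prev_sampler_state is not None:
        vae.load_state_dict(prev_sampler_state["vae"])
        discriminator.load_state_dict(prev_sampler_state["discriminator"])
        vae_epochs = max(1, vae_epochs // 2)  # fewer epochs when only fine-tuning
    train_vae_and_discriminator(vae, discriminator, full_loader, vae_epochs, device)
    return {"vae": vae.state_dict(), "discriminator": discriminator.state_dict()}

The VAE/Discriminator objective itself would stay unchanged; only the loop structure and the per-stage initialization differ.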

CIFAR10 Train Process

python main.py --cuda --dataset cifar10  --data_path /nfs/xs/Datasets/CIFAR10 \
--batch_size 128 --train_epochs 100 \
--latent_dim 32 --beta 1 --adversary_param 1
Iter 1000/39062, task loss: 0.213, vae: 3.931, dsc: 1.389:   3%|█ | 1046/39062 [21:38<13:05:31, 1.24s/it]
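
For reference, the 39062 total iterations in the progress bar seem to come from the full 50,000-image CIFAR10 training split: 50000 / 128 ≈ 390.6 iterations per epoch, times 100 epochs ≈ 39062. At about 1.24 s/it that is roughly 13.5 hours for a single stage, which is why the duplicated training across stages matters so much.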

I'm looking forward to your advice. Thanks a lot.

Sincerely
