Hello.
Thanks for your great work on representation-based active learning. This method does save time compared with Ensemble, Core-set, or MC Dropout when querying informative samples.
However, I have some questions.
I find the transductive representation learning quite time-consuming, since it uses all the images in the training set. That becomes largely duplicated work in the later stages (15%, 20%, ..., 40%), because the majority of the labeled and unlabeled splits of the training set has not changed.
Also, if we train the VAE and Discriminator for 100 epochs, the task model is effectively trained for more than 100 epochs, since len(labelset) < len(trainset). I think splitting the task model training from the VAE and Discriminator may be a better choice.
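To make the epoch-count concern concrete, here is a small sketch of the arithmetic. It assumes (as in the reference training loop) that one "epoch" runs one optimization step per batch drawn from the full training set, while the task model's batches come only from the smaller labeled set; the function name and batch size are illustrative, not from the repo:

```python
def effective_task_epochs(num_epochs, trainset_size, labelset_size, batch_size=128):
    """Effective passes over the labeled set when the loop length is
    driven by the full training set rather than the labeled subset."""
    steps_per_epoch = trainset_size // batch_size        # steps the joint loop runs
    labeled_steps_per_pass = labelset_size // batch_size  # steps in one labeled pass
    return num_epochs * steps_per_epoch / labeled_steps_per_pass

# CIFAR-10 at the 10% stage: 50,000 training images, 5,000 labeled.
print(effective_task_epochs(100, 50_000, 5_000))  # -> 1000.0
```

So at the 10% stage the task model would see roughly ten labeled-set passes per nominal epoch, which is why decoupling its schedule from the VAE/Discriminator seems worthwhile.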
[Screenshot: CIFAR10 Train Process]
I'm looking forward to your advice. Thanks a lot.
Sincerely