Questions about training details. #5

Open

rosieyeon opened this issue Aug 28, 2020 · 1 comment

Comments

@rosieyeon

Hello, thanks for your impressive work.
I am trying to reproduce the results of the source-only baseline, AdaptSeg, and the proposed method on the C-Driving benchmarks.
I checked the appendix (C.2. Training details), but a few points are still unclear to me.
I’d really appreciate your reply.

  1. Which initial weights did you use for training each method?
    Random initialization, the vgg16_bn provided by torchvision, or something else?

  2. Did you use a training process for the C-Driving benchmarks similar to the one used for the OCDA classification tasks?
    Specifically, is the overall process as follows?
    (1) Train the source net.
    (2) Compute class centroids from the trained source net.
    (3) Fine-tune the model, initialized from the source model in (1), with fixed centroids and curriculum learning.

  3. When constructing the visual memory, did you average all the features belonging to the same category at once, or first average the features of the same category within each image? (I sketch both options below.)
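
To make question 3 concrete, here is roughly what I mean by the two options. The tensor shapes and names are just my assumptions, not taken from this repo:

```python
import torch

# feats: [N, C] features pooled from several images; labels: [N] category ids;
# img_ids: [N] index of the image each feature came from.
# (All shapes and names here are my assumptions, not from the repo.)

def centroid_all_at_once(feats, labels, cls):
    # Option A: average every feature of this category across the whole set.
    return feats[labels == cls].mean(dim=0)

def centroid_per_image_first(feats, labels, img_ids, cls):
    # Option B: first average the category's features inside each image,
    # then average those per-image means.
    per_image = []
    for i in img_ids.unique():
        mask = (img_ids == i) & (labels == cls)
        if mask.any():
            per_image.append(feats[mask].mean(dim=0))
    return torch.stack(per_image).mean(dim=0)
```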

@XingangPan

@seyeon956 Thanks for your interest in our work. Here are my answers:

  1. We use the vgg16_bn provided by torchvision.
  2. Yes, the overall process is as you described.
  3. We average all the features belonging to the same category at once.
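
For reference, the initialization in (1) looks roughly like this with torchvision. This is only a sketch of the backbone loading, not the exact code from this repo:

```python
import torchvision

# Backbone initialized from torchvision's ImageNet-pretrained vgg16_bn;
# any segmentation-specific layers on top would still be randomly initialized.
backbone = torchvision.models.vgg16_bn(pretrained=True)
encoder = backbone.features  # convolutional stages reused as the encoder
```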
