Unable to Reproduce the Final Result #9
Comments
This may be a problem caused by the learning rate dropping to 0.

Could you please provide the code, or refer us to a link? Thanks for replying.

Or adjust max_iterations.

Hello, is there a formula for calculating this max_iterations?
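For reference, the schedule in the snippet quoted below computes max_iterations as the total number of optimizer steps, i.e. max_epoch * len(trainloader), and decays the rate polynomially. A minimal sketch of that decay (poly_lr is a hypothetical name here; the repo's own helper is lr_poly):

```python
def poly_lr(base_lr, i_iter, max_iterations, power=0.9):
    # Polynomial decay: starts at base_lr and reaches exactly 0
    # when i_iter == max_iterations.
    return base_lr * (1 - i_iter / max_iterations) ** power

# max_iterations is the total number of optimizer steps over the run:
# max_iterations = max_epoch * len(trainloader)
```

So there is no separate formula to derive: max_iterations follows directly from the epoch count and the number of batches per epoch.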
Hi, we ran the code for 295 epochs. Below is the log from the run. Please help us if we are missing something:
```
epoch [294/295] train_loss 0.2000 supervised_loss 0.1954 consistency_loss 0.0012 train_iou: 0.9596 - val_loss 0.5416 - val_iou 0.6689 - val_SE 0.5690 - val_PC 0.6468 - val_F1 0.5644 - val_ACC 0.7565
```
We made the following modification to the learning-rate adjustment, as we were encountering "RuntimeError: For non-complex input tensors, argument alpha must not be a complex number.", based on the link you provided in the other issue:
```python
def adjust_learning_rate(optimizer, i_iter, len_loader, max_epoch, power, args):
    # lr_poly is the repo's polynomial-decay helper (see the sketch above);
    # max_epoch * len_loader is the total number of optimizer steps.
    lr = lr_poly(args.base_lr, i_iter, max_epoch * len_loader, power)
    optimizer.param_groups[0]['lr'] = lr
    if len(optimizer.param_groups) > 1:
        # A second param group (typically the decoder/head) runs at 10x the rate.
        optimizer.param_groups[1]['lr'] = lr * 10
    return lr

# Called once per training iteration:
lr_ = adjust_learning_rate(optimizer, iter_num, len(trainloader), max_epoch, 0.9, args)
```
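Note that this poly schedule hits exactly zero on the final iteration, which is consistent with the "learning rate dropping to 0" diagnosis in the first comment. A quick illustration with made-up numbers (base_lr and batches-per-epoch are assumptions; only the 295-epoch count comes from the log above):

```python
base_lr, power = 0.01, 0.9            # base_lr assumed for illustration
max_epoch, len_loader = 295, 100      # len_loader assumed for illustration
max_iter = max_epoch * len_loader

for i in (0, max_iter // 2, max_iter - 1, max_iter):
    lr = base_lr * (1 - i / max_iter) ** power
    print(f"iter {i:>5}: lr = {lr:.8f}")
# lr falls from 0.01 at iter 0 to ~1e-6 on the penultimate step and 0 on the
# last, so updates in the final epochs are effectively frozen.
```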