Thanks for your great work and the open-sourced code! I have trained configs/ESAM-E_CA/ESAM-E_sv_scannet200_CA.py on 1 GPU with lr=1e-4 and on 8 GPUs with lr=8e-4.
Hi,
Thanks for your interest! Unfortunately, we have not tried training ESAM on more than 4 GPUs. I suggest trying several learning rates while keeping the other hyperparameters fixed.
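Since the config path suggests an MMEngine/MMDetection3D-style codebase, one thing worth checking (an assumption on my part, not something the authors confirmed) is MMEngine's built-in automatic learning-rate scaling, which rescales the optimizer's lr in proportion to the actual total batch size:

```python
# Hypothetical config fragment (MMEngine convention, assumed applicable here):
# when enabled, the runner multiplies the configured lr by
# (actual total batch size) / base_batch_size.
auto_scale_lr = dict(
    enable=True,
    base_batch_size=4,  # placeholder: set to the batch size the base lr was tuned for
)
```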
Here are the results:
1 GPU with lr=1e-4
8 GPUs with lr=8e-4
Do you have any suggestions for maintaining performance when training on multiple GPUs to accelerate training?
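For reference, the 8e-4 choice follows the linear scaling rule (scale lr with the number of GPUs, i.e. with the effective batch size). A common companion to that rule, which may help if the 8-GPU run diverges or underperforms early, is a linear warmup of the learning rate. A minimal sketch of both, assuming a base lr tuned for 1 GPU (the function names are mine, not from the ESAM codebase):

```python
def scaled_lr(base_lr: float, base_gpus: int, num_gpus: int) -> float:
    """Linear scaling rule: lr grows in proportion to the effective batch size."""
    return base_lr * num_gpus / base_gpus


def warmup_lr(target_lr: float, step: int, warmup_steps: int) -> float:
    """Linearly ramp the lr from ~0 up to target_lr over warmup_steps
    to stabilize the early iterations of large-batch training."""
    if step >= warmup_steps:
        return target_lr
    return target_lr * (step + 1) / warmup_steps


# Example: base lr 1e-4 tuned on 1 GPU, scaled to 8 GPUs -> 8e-4,
# with the first 500 iterations warmed up linearly.
lr_8gpu = scaled_lr(1e-4, base_gpus=1, num_gpus=8)
```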