
Training in 8GPU with lower performance #48

Open
ZCMax opened this issue Mar 28, 2025 · 1 comment
Comments

ZCMax commented Mar 28, 2025

Thanks for your great work and the open-sourced code. I trained configs/ESAM-E_CA/ESAM-E_sv_scannet200_CA.py on 1 GPU with lr=1e-4 and on 8 GPUs with lr=8e-4.

Here are the results:

1 GPU with lr=1e-4

+---------+---------+---------+--------+
| classes | AP_0.25 | AP_0.50 | AP     |
+---------+---------+---------+--------+
| object  | 0.8941  | 0.7831  | 0.5758 |
+---------+---------+---------+--------+
| Overall | 0.8941  | 0.7831  | 0.5758 |
+---------+---------+---------+--------+

8 GPUs with lr=8e-4

+---------+---------+---------+--------+
| classes | AP_0.25 | AP_0.50 | AP     |
+---------+---------+---------+--------+
| object  | 0.8853  | 0.7684  | 0.5508 |
+---------+---------+---------+--------+
| Overall | 0.8853  | 0.7684  | 0.5508 |
+---------+---------+---------+--------+

Do you have any suggestions for maintaining performance when training on multiple GPUs to speed up training?
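For reference, the usual heuristic behind the lr=8e-4 choice above is the linear scaling rule: when the effective batch size grows k-fold across GPUs, scale the base learning rate by k, typically paired with a warmup period. A minimal sketch of that rule (hypothetical helper names, not tied to ESAM's actual config):

```python
# Sketch of the linear LR scaling rule commonly used for multi-GPU
# training. Function names and the warmup length are illustrative.

def scaled_lr(base_lr: float, base_gpus: int, gpus: int) -> float:
    """Linear scaling: LR grows proportionally with the number of GPUs."""
    return base_lr * gpus / base_gpus

def warmup_lr(target_lr: float, step: int, warmup_steps: int) -> float:
    """Linear warmup from near zero up to target_lr over warmup_steps."""
    if step >= warmup_steps:
        return target_lr
    return target_lr * (step + 1) / warmup_steps

print(scaled_lr(1e-4, 1, 8))     # ~8e-4, matching the 8-GPU setting above
print(warmup_lr(8e-4, 99, 500))  # partway through warmup
```

Without the warmup, the large scaled LR can destabilize the early steps and cost final accuracy, which is one common cause of the gap reported above.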

ZCMax changed the title from "Training in 8GPU without lower performance" to "Training in 8GPU with lower performance" on Mar 28, 2025
xuxw98 (Owner) commented Apr 6, 2025

Hi,
Thanks for your interest! Sorry, we have not tried training ESAM on more than 4 GPUs. I suggest trying more learning rates while keeping the other hyperparameters fixed.
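One way to pick the candidates for such a sweep is to bracket the range between no scaling and linear scaling, including the gentler square-root rule. A hedged sketch (the candidate values are illustrative, not from the ESAM repo):

```python
import math

# Candidate learning rates for an 8-GPU sweep around the 1-GPU baseline.
# Linear scaling (x8) can be too aggressive on small datasets; square-root
# scaling (x sqrt(8) ~= 2.83) is a common gentler alternative.
base_lr, k = 1e-4, 8

candidates = sorted({
    base_lr,                  # 1-GPU baseline, unscaled
    base_lr * math.sqrt(k),   # square-root scaling
    base_lr * k / 2,          # an intermediate value
    base_lr * k,              # linear scaling (the 8e-4 tried above)
})

for lr in candidates:
    print(f"lr = {lr:.2e}")
```

Running each candidate with everything else fixed, as suggested above, isolates the learning rate as the variable responsible for the performance gap.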
