How to support multi-GPU training? #3

Open · CarlHuangNuc opened this issue Mar 21, 2024 · 4 comments

Comments

@CarlHuangNuc

Dear Author,

Does this codebase support multi-GPU training?

I tried changing this parameter (`parser.add_argument('--GPU', required=False, type=int, default=-1, help='ID of the GPU to use')`), but I can still only use one GPU.
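
For context, a flag like this selects a single device rather than enabling data parallelism. Below is a minimal sketch of how such an argument is usually consumed, assuming a plain PyTorch training script; the placeholder model is illustrative and not from this repository:

```python
import argparse

import torch
import torch.nn as nn

parser = argparse.ArgumentParser()
# Same flag as in the repository: a single integer can only ever name
# one device, so changing it moves training to another GPU rather than
# spreading work across several GPUs.
parser.add_argument('--GPU', required=False, type=int, default=-1,
                    help='ID of the GPU to use')
args = parser.parse_args()

device = torch.device(f'cuda:{args.GPU}' if args.GPU >= 0 else 'cpu')
model = nn.Linear(10, 2).to(device)  # placeholder model, not from the repo
```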

@SilvioGiancola
Contributor

I don't think multi-GPU training was implemented; you can easily train with a single commodity GPU. `--GPU` is meant to select the ID of the GPU to train on, in case you have more than one. If you have suggestions for multi-GPU training, feel free to make the changes and open a pull request.
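
If someone wants to attempt that pull request, here is a minimal sketch of one possible starting point, assuming the training loop is plain PyTorch; the placeholder model and everything else here are assumptions about this codebase, not its actual code:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # placeholder for the repository's model

# Wrap the existing model in DataParallel when more than one GPU is
# visible. DataParallel replicates the model, splits each input batch
# across GPUs, and gathers the outputs, so the rest of the training
# loop can stay unchanged.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)  # uses all visible GPUs by default
if torch.cuda.is_available():
    model = model.to('cuda')

# For better scaling, torch.nn.parallel.DistributedDataParallel with a
# DistributedSampler is the usual recommendation, but it requires one
# process per GPU (e.g. launched via torchrun) and larger changes.
```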

@CarlHuangNuc
Author

Thanks for your quick response. Can the training process run on a single NVIDIA V100 GPU?

@SilvioGiancola
Contributor

I think a V100 will be enough, yes. @heldJan can confirm :)

@heldJan
Collaborator

heldJan commented Mar 24, 2024

Yes, a V100 is enough :)
