Running On Multiple GPUs #27
Comments
Without seeing the code, it's difficult to say.
Hi, I'm still having the problem. I changed that line so the torch device covers two GPUs (passed via set_device), but it still runs on one GPU.
Sorry for the late response. |
GPU 0 with about 30000 MiB |
I have the same problem. Is there any solution? |
Hi, I am running the image harmonization part of the model with --train_stages 6, --max_size 350, and --lr_scale 0.5 to increase the quality of the images.
However, once I get to the second stage of training, it crashes due to a lack of CUDA memory. I changed the torch device for the model to use more than one GPU (say, GPUs 0 and 1) and wrapped the model in a DataParallel module so that it can run on multiple GPUs in parallel. However, it still only runs on one GPU.
Do you have any suggestions to fix this issue?
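A minimal sketch of the DataParallel wrapping described above, assuming two visible GPUs (device IDs 0 and 1). The model here is a hypothetical stand-in, not the actual generator from this repository. Note that nn.DataParallel splits the *batch* dimension of the input across GPUs, so with a batch size of 1 (typical for single-image GANs) the entire batch lands on one GPU, which may be why only one GPU shows activity:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for one stage of the generator; the real model
# in this repository is constructed elsewhere.
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1),
)

if torch.cuda.device_count() > 1:
    # DataParallel replicates the module on each listed GPU and scatters
    # the input along dim 0 (the batch dimension).  A batch of size 1
    # cannot be split, so only the first GPU would actually be used.
    model = nn.DataParallel(model, device_ids=[0, 1])
    model = model.cuda()

x = torch.randn(1, 3, 64, 64)  # batch size 1: nothing to scatter
if torch.cuda.is_available():
    x = x.cuda()
out = model(x)
print(out.shape)  # output keeps the input's batch and spatial dimensions
```

If the goal is to fit a single large model across GPUs rather than to parallelize batches, model parallelism (placing different stages on different devices) would be needed instead; DataParallel alone will not reduce per-GPU memory for a batch of 1.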