Is there a multi-GPU version for inference? #780
-
Hi all, is there a multi-GPU version for inference?

Best,
-
Hi Gang,

you can use multiple GPUs for inference by parallelizing the inference on a per-data level.

Example:

```
CUDA_VISIBLE_DEVICES=0 nnUNet_predict [...] --part_id 0 --num_parts 2
CUDA_VISIBLE_DEVICES=1 nnUNet_predict [...] --part_id 1 --num_parts 2
```

(run these two at the same time)

This will run half of the images on GPU0 and the other half on GPU1.
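The same pattern scales to more GPUs by launching one part per GPU. Below is a minimal bash sketch of that idea; the `-i`/`-o`/`-t`/`-m` arguments and the `INPUT_FOLDER`/`OUTPUT_FOLDER`/`TASK`/`CONFIG` placeholders are assumptions standing in for the `[...]` above, so substitute your own values.

```bash
#!/usr/bin/env bash
# Sketch: one nnUNet_predict part per GPU, all parts running in parallel.
# INPUT_FOLDER, OUTPUT_FOLDER, TASK and CONFIG are placeholders for the
# "[...]" arguments in the example above -- replace them with your own.
NUM_GPUS=4  # number of GPUs to split the images across

for GPU in $(seq 0 $((NUM_GPUS - 1))); do
    # Each process sees exactly one GPU and predicts its share of the cases.
    CUDA_VISIBLE_DEVICES=$GPU nnUNet_predict \
        -i INPUT_FOLDER -o OUTPUT_FOLDER -t TASK -m CONFIG \
        --part_id "$GPU" --num_parts "$NUM_GPUS" &
done
wait  # block until all parts have finished
```

As in the two-GPU example, `--part_id`/`--num_parts` split the list of input cases, so each process handles a disjoint subset and the processes can safely write to the same output folder.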