
Is there a multi-GPU version for inference? #780

Answered by FabianIsensee
fgbxx asked this question in Q&A


Hi Gang,
you can use multiple GPUs for inference by parallelizing at the data level: each process predicts a different subset of the images.
Example:

CUDA_VISIBLE_DEVICES=0 nnUNet_predict [...] --part_id 0 --num_parts 2
CUDA_VISIBLE_DEVICES=1 nnUNet_predict [...] --part_id 1 --num_parts 2

(run these two at the same time)
This will run half of the images on GPU0 and the other half on GPU1.
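
The same pattern generalizes to more than two GPUs: launch one nnUNet_predict process per GPU and set --num_parts to the number of GPUs. A minimal bash sketch of this idea is below; the [...] stands for your usual nnUNet_predict arguments (input/output folders, model selection, etc.), and NUM_GPUS is a placeholder for your GPU count:

#!/usr/bin/env bash
# Sketch: one nnUNet_predict process per GPU.
# --part_id selects which subset of the images each process handles;
# replace [...] with your usual nnUNet_predict arguments.
NUM_GPUS=2
for GPU in $(seq 0 $((NUM_GPUS - 1))); do
    CUDA_VISIBLE_DEVICES=$GPU nnUNet_predict [...] \
        --part_id "$GPU" --num_parts "$NUM_GPUS" &
done
wait   # block until all parts have finished

Launching the processes in the background with & and waiting at the end lets all parts run concurrently; each process only predicts its assigned cases, so no coordination between the processes is needed.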
