Multi-GPU support? #1621
-
Hi there, I have multiple GPUs in my machine and would like to saturate them all with the WebUI, e.g. to run inference in parallel for the same prompt. Has anyone done that? What would be a good entry point for parallelization? process_images in processing.py?
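For anyone looking for a starting point, here is a minimal sketch of the simplest approach: one process per GPU, each with its own copy of the model, all generating from the same prompt. It is written against diffusers' StableDiffusionPipeline rather than the WebUI's process_images (the WebUI internals assume a single model instance); the checkpoint name and seeds are placeholders.

```python
# Minimal sketch (not WebUI code): same prompt on every GPU, one process and
# one model copy per device. Checkpoint name and seeds are placeholders.
import torch
import torch.multiprocessing as mp
from diffusers import StableDiffusionPipeline

MODEL_ID = "stabilityai/stable-diffusion-2-1"  # substitute your own checkpoint


def worker(device_index: int, prompt: str) -> None:
    device = f"cuda:{device_index}"
    # Each process loads a full copy of the weights onto its own GPU.
    pipe = StableDiffusionPipeline.from_pretrained(MODEL_ID, torch_dtype=torch.float16)
    pipe.to(device)
    # A different seed per GPU yields distinct variations of the same prompt.
    generator = torch.Generator(device=device).manual_seed(1000 + device_index)
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"out_gpu{device_index}.png")


if __name__ == "__main__":
    mp.set_start_method("spawn")  # CUDA requires spawn, not fork
    prompt = "a photo of an astronaut riding a horse on mars"
    procs = [mp.Process(target=worker, args=(i, prompt))
             for i in range(torch.cuda.device_count())]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

Each process writes its own output, so throughput scales roughly with the number of GPUs; latency for a single image does not improve.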
Replies: 21 comments 20 replies
-
#644
-
@NickLucche Have you pursued the idea of implementing such support for this repository?
-
I poked at this enough to get it working for my use case, though it's not PR-quality code. It's very ugly and hacky, but it does wonders for inference speed. If someone wants to use it as a starting point or to pull ideas from, the changes are here: TikiTDO@2619a99. I might come back to clean it up someday, though that may be a challenge given the way the repo is structured.
-
There's a version of SD that does do this, but when I checked it out a couple of weeks ago it didn't seem to have a very user-friendly interface.
-
Someone (I don't know who) just posted a bounty-style paid job to get this feature implemented in Automatic1111. Might be a good way to earn some pocket money if anyone is up to the task.
-
very cool, looking forward to this feature
-
Not to nag, but any update on this? Is it still being worked on? If not, I may take a crack at it if I have time.
-
Also wondering if there has been any more movement in this area?
-
I'd also love to see this soon, and would donate a few bucks!!
-
I'm interested in this feature too; hope it's still being considered/worked on.
-
pleeeaaassse dewit!
-
interested!
-
StableSwarmUI supports this out of the box. It was made by a former Stability.ai employee.
-
Does SD support this feature yet, everyone?
-
I hope this will be supported one day.
-
I believe the person most likely to be able to do this is the multidiffusion dev, since that extension already splits the image into tiles for processing, similar to what the Ultimate SD Upscale script does. In multidiffusion you can set a number of batches, and I think it would be possible to throw each batch onto a separate GPU, as sketched below.
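A rough sketch of what that could look like, independent of multidiffusion's actual code: dispatch each tile batch to a different GPU from a thread pool, with one model copy per device. model_factory is a hypothetical callable that builds a fresh denoising model, standing in for whatever per-tile work the extension really does.

```python
# Sketch only, not multidiffusion's actual code: run independent tile batches
# on different GPUs from a thread pool. model_factory is a hypothetical callable
# that builds a fresh denoising model; a real integration would cache one model
# per device instead of rebuilding it for every batch.
from concurrent.futures import ThreadPoolExecutor

import torch


def run_batch_on_gpu(device_index, model_factory, tile_batch):
    device = torch.device(f"cuda:{device_index}")
    model = model_factory().to(device)  # independent weights per GPU
    with torch.no_grad():
        return model(tile_batch.to(device)).cpu()


def process_tile_batches(model_factory, tile_batches):
    n_gpus = torch.cuda.device_count()
    with ThreadPoolExecutor(max_workers=n_gpus) as pool:
        futures = [
            pool.submit(run_batch_on_gpu, i % n_gpus, model_factory, batch)
            for i, batch in enumerate(tile_batches)
        ]
        return [f.result() for f in futures]
```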
-
Have you considered using the GPUs only for the inference step, for example with accelerate?
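For what it's worth, accelerate makes the data-parallel case fairly painless outside the WebUI. A minimal sketch with diffusers, where each process owns one GPU and a slice of the prompt list; the checkpoint and prompts are placeholders.

```python
# Sketch: data-parallel inference with accelerate + diffusers. Each process
# (one per GPU) gets its own pipeline and a slice of the prompt list.
# Launch with: accelerate launch --num_processes=2 this_script.py
import torch
from accelerate import PartialState
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",  # placeholder checkpoint
    torch_dtype=torch.float16,
)
state = PartialState()  # knows this process's rank and device
pipe.to(state.device)

prompts = ["a red vintage car", "a snowy mountain cabin"]
with state.split_between_processes(prompts) as my_prompts:
    for i, prompt in enumerate(my_prompts):
        image = pipe(prompt).images[0]
        image.save(f"result_rank{state.process_index}_{i}.png")
```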
-
With the arrival of Flux, this becomes more important. Flux maxes out VRAM, swapping models in and out during inference to avoid OOM, which slows things down a lot.
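One multi-GPU option outside the WebUI, sketched below under the assumption of a recent diffusers release that accepts a pipeline-level device_map: shard the Flux text encoders, transformer, and VAE across the visible GPUs so nothing has to swap to CPU mid-inference.

```python
# Sketch: keep all of Flux resident by spreading its components across the
# visible GPUs, instead of swapping to CPU between steps. Assumes a diffusers
# version that supports a pipeline-level device_map ("balanced" strategy).
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # gated model; substitute your own checkpoint
    torch_dtype=torch.bfloat16,
    device_map="balanced",
)

image = pipe(
    "a tiny astronaut hatching from an egg on the moon",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("flux_multi_gpu.png")
```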
-
yes, very important
-
Bump. This IS important.
#644
#156
@NickLucche appears to be working on something for this