add Qwen Image support #851
Conversation
Where/how does it crash with Vulkan? |
Testing it here, I get:
gdb shows just this:
I'll try on a debug build. @jeffbolznv , anything more specific I could check? |
@SeanTater @wbruna This is likely because GGML Vulkan doesn’t support im2col_3d. I’ve updated GGML, so you can pull the latest code and try again. |
@leejet , unfortunately a3a2b2d (with ggml 553c44706c) crashes too; these are the last output lines:
GDB backtrace
|
What are the src and dst types for the GET_ROWS that crashes? |
Interesting... the model files are qwen-image-Q4_0.gguf and Qwen2.5-VL-7B-Instruct-IQ4_XS.gguf . |
Thanks. We're missing the K quants but I don't think there's any reason for this. I'll add it. |
Please try ggml-org/llama.cpp#16235. |
After applying the change from Jeff's PR in llama.cpp to the ggml submodule in stable-diffusion.cpp, it does run without crashing. But I get garbled output, and even though it does recognize the devices:
.. and it swears it places them on VRAM ..
rocm-smi disagrees:
It does finish in 17 seconds per step as opposed to about 70 for successful CPU sampling, but I think that may be a red herring since the output is garbage and the GPU is idle |
After applying ggml-org/llama.cpp@9073a73 and ggml-org/llama.cpp#16235 , I got a broken image too:
[broken output image]
VAE tiling also crashes with a
I'm seeing similar corruption. I'll try to debug it. |
I updated ggml to the latest commit and optimized the handling of embedding weights, so there’s no need to use k_quant’s get_rows. I’m not sure if this will fix the Vulkan issue. |
I don't think it's related to get_rows. Setting GGML_VK_DISABLE_FUSION=1 seems to fix it. I'll continue to narrow it down. |
Oops, I think I mixed up my experiments. I think it's forcing GGML_PREC_F32 for matrix-matrix multiplies that's fixing it. I don't know which multiplies, I just forced it for all of them. |
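For reference, here is a minimal sketch of what "forcing GGML_PREC_F32" can look like at graph-build time. ggml_mul_mat_set_prec and GGML_PREC_F32 are existing ggml API; the wrapper function and tensor names below are purely illustrative, not the actual experiment:

```cpp
#include "ggml.h"

// Illustrative helper: build a matmul node and request FP32 accumulation
// for it instead of the backend's default (often FP16) precision.
static struct ggml_tensor * mul_mat_f32_prec(struct ggml_context * ctx,
                                             struct ggml_tensor  * w,   // weights (possibly quantized)
                                             struct ggml_tensor  * x) { // activations
    struct ggml_tensor * y = ggml_mul_mat(ctx, w, x);
    ggml_mul_mat_set_prec(y, GGML_PREC_F32); // hint: accumulate this node in FP32
    return y;
}
```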
@wbruna Could you show me your detailed parameters for comparison? |
@leejet , testing b769da2 with the Lightning distill:
[generated image]
The same command with ROCm, just replacing the binary, renders a black image. The ROCm build command is:
full ROCm output (Vulkan is identical except for the hardware info and the timings)
|
Hello @leejet , sorry for the delay. My params are very simple and now fully reproducible every time. Here is the terminal log:
And here is the output, which is a black square. Only CUDA is broken; CPU and Vulkan work fine. |
With ROCm I also get a black image, BUT the first step does produce an image, so I guess it's something with sampling. Maybe it happens on CUDA too, since ROCm is similar to (or partly derived from) CUDA. So maybe try to generate a 1-step image with CUDA and see if it's black. |
I tried the q4_K_S quantization and reproduced the black image issue. It’s likely due to a precision problem in the CUDA computations related to q4_K_S. |
The full q8_0 model won't fit on my card, but the Pruning 13b q8_0 works on ROCm, too. |
I wonder what ops it could be besides matmul, since it works on Vulkan with q4_K_S. |
I had to implement get_rows for this; I think CUDA may also be missing it. BTW, it's a shame that sd is still using the ggml path that doesn't automatically fall back on unsupported ops. |
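For context, the "automatic fallback" mentioned above boils down to a per-op capability check. Here is a rough sketch of the idea, assuming one GPU and one CPU backend; `ggml_backend_supports_op` is real ggml API, but the helper itself is hypothetical:

```cpp
#include "ggml-backend.h"

// Hypothetical helper: run a graph node on the CPU backend whenever the GPU
// backend reports it cannot execute that op (e.g. GET_ROWS on a k-quant
// before support was added). The ggml scheduler path does this automatically.
static ggml_backend_t pick_backend_for_op(ggml_backend_t gpu,
                                          ggml_backend_t cpu,
                                          const struct ggml_tensor * node) {
    return ggml_backend_supports_op(gpu, node) ? gpu : cpu;
}
```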
In sd.cpp, if a type isn’t one of {GGML_TYPE_F16, GGML_TYPE_Q8_0, GGML_TYPE_Q5_1, GGML_TYPE_Q5_0, GGML_TYPE_Q4_1, GGML_TYPE_Q4_0}, it’ll first get converted to GGML_TYPE_F32 on the CPU before calling get_rows. |
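In other words, roughly this (a sketch of the behavior described above, not the literal sd.cpp code; the function names are made up):

```cpp
#include "ggml.h"

// Types described as safe to pass to ggml_get_rows directly; anything else
// is converted to GGML_TYPE_F32 on the CPU at load time first.
static bool get_rows_friendly(enum ggml_type t) {
    switch (t) {
        case GGML_TYPE_F16:
        case GGML_TYPE_Q8_0:
        case GGML_TYPE_Q5_1:
        case GGML_TYPE_Q5_0:
        case GGML_TYPE_Q4_1:
        case GGML_TYPE_Q4_0:
            return true;
        default:
            return false;
    }
}

static struct ggml_tensor * embed_tokens(struct ggml_context * ctx,
                                         struct ggml_tensor  * table,       // token embedding weights
                                         struct ggml_tensor  * token_ids) { // I32 token indices
    // By the time the graph is built, the table is expected to be either a
    // directly supported type or already dequantized to F32.
    GGML_ASSERT(table->type == GGML_TYPE_F32 || get_rows_friendly(table->type));
    return ggml_get_rows(ctx, table, token_ids);
}
```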
@leejet @jeffbolznv I tried using q4_0 instead of q4_k_s and it worked on CUDA!!!
I tried using q2_k as well, and it also generated a black image. Maybe all the k-quants have this issue. |
If we can narrow it down to a specific offending tensor, maybe we can force-convert it to a different type. Edit 2: it fails when I load a "q4_0" model, but it works when I manually set the wtype to convert to q4.
I’ve located and fixed the issue. It’s working fine on my side now; you can test it again on your end @LostRuins @wbruna. q2_k with CUDA:
[generated image] |
common.hpp
// The purpose of the scale here is to prevent NaN issues in certain situations.
// For example, when using Vulkan without enabling force_prec_f32,
// or when using CUDA but the weights are k-quants.
float scale = 1.f / 128.f;
x = ggml_scale(ctx, x, scale);
x = net_2->forward(ctx, x); // [ne3, ne2, ne1, dim_out]
x = ggml_scale(ctx, x, 1.f / scale);
Curious which part of the CUDA backend causes the issue here? I assume you are working around some FP overflow?
It’s likely that ggml_mul_mat has a precision issue when the weights are k-quants.
I wonder why Jeff's ggml_mul_mat_set_prec fix worked for Vulkan but not CUDA; could CUDA be ignoring that?
The CUDA approach to matmul is pretty different (see #851 (comment)). Anecdotally it seems to be less prone to precision issues, but I guess it can still run into problems.
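As an aside on the scale/unscale workaround in the snippet above: it relies on the matmul being linear, so scaling the activations down scales the product down by the same factor and, assuming the NaNs really do come from FP16 intermediates overflowing as speculated above (FP16 tops out around 65504), keeps them in range; the final ggml_scale restores the original magnitude. The 1/128 is simply the factor chosen in the patch:

```math
\left(\tfrac{1}{s} x\right) W = \tfrac{1}{s}\,(x W)
\quad\Longrightarrow\quad
s \cdot \left(\left(\tfrac{1}{s} x\right) W\right) = x W, \qquad s = 128
```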
Alright, I'm building all your latest changes on CUDA and will let you know how it goes. |
@LostRuins Are there still any issues on your side during testing? |
My previous build failed due to an unrelated issue, so I am rebuilding it again. It takes me 1 hour for each CUDA build lol. |
Yeah… building ggml with CUDA always takes a lot of time. |
@leejet seems to be working well now, thanks! [generated image] |
It looks like this PR can be merged now. Thanks, everyone! |
Just a shoutout to anyone wanting to try this out with low-powered hardware: I am successfully running this on a 2019 Ryzen 5 3400G with an iGPU (Vega RX 11), 16 GB of RAM allocated as VRAM and 24 GB of GTT memory, under Ubuntu (64 GB system RAM total). On the Q4_0 quant I am generating the same image of a cat as above at 512 px in 504 s (as opposed to the 16.85 s achieved with the 4090 above). With the Q8_0 quant at 1024 px, things become very slow: all 16 GB of VRAM used plus 8 GB of GTT and another 21 GB offloaded to system RAM, so a total of ~45 GB of memory used and around 330 s/it. Thank you @leejet ! Edit: it's running on Vulkan.
txt2img
img2img
Qwen Image Edit
#877