Feature Request: Qwen 2.5 VL #11483
Comments
I'm currently looking into Transformers' Qwen2.5VL implementation and waiting for the paper to drop so I can better assess the differences between Qwen2VL and Qwen2.5VL. 👀
cool
I support this!
Our world definitely needs this!
Any progress on this? Who added support for Qwen 2 VL?
qwen2.5-vl report is up! https://huggingface.co/papers/2502.13923 edit: official codebase here: https://github.com/QwenLM/Qwen2.5-VL
I can start working on this if no one else is already.
OK then! First order of business would be to build the GGUF file(s). It seems there is an issue with that and the latest official Transformers:
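For context, the language-model part of a llama.cpp conversion normally goes through convert_hf_to_gguf.py; a rough sketch is below, assuming a local HF snapshot and a llama.cpp checkout. Paths and flags are illustrative only, and whether the converter accepts the Qwen2.5-VL architecture out of the box is exactly the open question here.

```python
# Sketch of the language-model conversion step. Paths are hypothetical and the
# converter may reject the Qwen2.5-VL config until the issue above is resolved.
import subprocess

MODEL_DIR = "Qwen2.5-VL-7B-Instruct"   # hypothetical local HF snapshot
LLAMA_CPP = "llama.cpp"                # hypothetical checkout path

subprocess.run(
    [
        "python",
        f"{LLAMA_CPP}/convert_hf_to_gguf.py",
        MODEL_DIR,
        "--outfile", "qwen2.5-vl-7b-f16.gguf",
        "--outtype", "f16",
    ],
    check=True,
)
```

The vision encoder / mmproj is produced separately (see the surgery-script discussion further down).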
This is pretty hot: it appears a temporary workaround would be to use the old Qwen2 templates. People are reporting that this works, so I'll post an update in a bit.
Right, so this one is a bit of a rabbit hole...

I. Reverting the Qwen2.5 config files to:

and

produces a (seemingly) working model! We've started testing and quantizing it here:
(A rough sketch of this kind of config edit is included after this comment.)

II. In order to get a usable experience, you need to make sure CLIP is running with hardware acceleration. This currently requires you to revert this commit: For more information, refer to: The following PR seems to correct (at least) some of the issues that led to disabling hardware acceleration in the first place:

So, it is now up to us to prove that everything is working properly. I'll start a stress / perf eval test alongside the quantization process, so we have a better idea of what's going on.
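A minimal sketch of the config revert described in point I, assuming the workaround amounts to pointing the model_type / image_processor_type fields back at the Qwen2-VL identifiers. The exact field values are assumptions, since the actual file contents were not quoted in this thread:

```python
# Hypothetical patch of the HF config files back to Qwen2-VL identifiers so the
# existing conversion path recognizes the model. Field values are assumptions.
import json
from pathlib import Path

model_dir = Path("Qwen2.5-VL-7B-Instruct")  # hypothetical local snapshot

config_path = model_dir / "config.json"
config = json.loads(config_path.read_text())
config["model_type"] = "qwen2_vl"  # assumed Qwen2 value, replacing the Qwen2.5 one
config_path.write_text(json.dumps(config, indent=2))

preproc_path = model_dir / "preprocessor_config.json"
preproc = json.loads(preproc_path.read_text())
preproc["image_processor_type"] = "Qwen2VLImageProcessor"  # assumed Qwen2 value
preproc_path.write_text(json.dumps(preproc, indent=2))
```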
UPDATE: A few 4-bit quants have been uploaded, including two that support online auto-repacking. The latest main looks stable with Vulkan CLIP and any model thrown at it so far. Some preliminary insights:
Output quality looks very promising! We'll release all of the benchmark code when ready, so the process can be streamlined for other models.
Hi! Excellent news, thank you very much for this! I was able to run the model using code from git main on a 4 x Radeon 7900 XTX 24 GB workstation, but with CLIP on the CPU. I tried to enable Vulkan acceleration for CLIP by uncommenting the lines in clip.cpp under examples, but in that case I get an OOM. I tried this with the FP16, Q4_K_M and IQ4_XS models. Telling the CLI to use just one Vulkan device does not help with the OOM / CLIP-on-GPU issue either.
Hi, could you please confirm what the resolution of your input images is? EDIT: As per the Qwen2.5 docs: An RTFM moment for me...
Thanks. My image was 1475x1062. I was able to run inference successfully using a 1077x671 sample, without OOM. Would it be possible to run CLIP and the VL model on separate GPUs? Thanks again.
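Since the OOM appears to be driven purely by input resolution, one workaround is to pre-scale images to a fixed pixel budget before handing them to the CLIP path. This is only a sketch; the budget below is an assumption chosen to sit between the failing 1475x1062 case and the working 1077x671 case:

```python
# Rough sketch: shrink an image so its pixel count stays under a budget before
# passing it to the vision encoder. The 1M-pixel budget is an assumption;
# tune it to your VRAM.
from PIL import Image

MAX_PIXELS = 1_000_000

def shrink_to_budget(path: str, out_path: str, max_pixels: int = MAX_PIXELS) -> None:
    img = Image.open(path)
    w, h = img.size
    if w * h > max_pixels:
        scale = (max_pixels / (w * h)) ** 0.5
        img = img.resize((int(w * scale), int(h * scale)), Image.LANCZOS)
    img.save(out_path)

shrink_to_budget("input.jpg", "input_small.jpg")
```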
Thank you very much for your research and for sharing! I would like to ask how to get the mmproj from the Qwen2.5-VL model. The original qwen2_vl_surgery.py used for Qwen2-VL doesn't seem to work; could you share your method? Thank you very much!
Get it from our HF:
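For anyone who still wants to produce the mmproj themselves rather than download it, the existing Qwen2-VL surgery script is the starting point; a hypothetical invocation is sketched below. The script location and flags are assumptions, and as noted above it may need changes before it handles Qwen2.5-VL checkpoints:

```python
# Hypothetical invocation of llama.cpp's Qwen2-VL surgery script to export the
# vision encoder as an mmproj GGUF. Script path and flags are assumptions.
import subprocess

subprocess.run(
    [
        "python",
        "llama.cpp/examples/llava/qwen2_vl_surgery.py",  # assumed location
        "Qwen/Qwen2-VL-7B-Instruct",                     # HF model id or local path
        "--data_type", "fp16",                           # assumed flag
    ],
    check=True,
)
```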
Thank you for the effort; a lot of people really need this. Any updates on the progress? Will this still take a few days, or is it more like a few weeks or months? Thanks a lot again, we appreciate you guys a lot!
@vladislavdonchev Great work! Have you done the 3B version? I can also do it myself if you provide the conversion script :)
Working on it as we speak, along with a quantization tool:
UPDATE: Opened a draft PR here: #12119

Long story short, I'll need some help debugging the vision models and llama-qwen2vl-cli, as we're unable to produce anything reliably. In addition, this still isn't resolved: I've also asked the Qwen folks for help:
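For anyone helping with the debugging, a quick smoke test over a folder of images might look roughly like this; the binary name and flags follow the llava-style CLI and the file names are assumptions:

```python
# Illustrative smoke test: run the vision CLI over a set of test images and
# flag any run that errors out or produces empty output.
import subprocess
from pathlib import Path

MODEL = "qwen2.5-vl-7b-q4_k_m.gguf"    # hypothetical quantized model
MMPROJ = "qwen2.5-vl-mmproj-f16.gguf"  # hypothetical vision projector

for image in sorted(Path("test_images").glob("*.jpg")):
    result = subprocess.run(
        ["./llama-qwen2vl-cli", "-m", MODEL, "--mmproj", MMPROJ,
         "--image", str(image), "-p", "Describe this image."],
        capture_output=True, text=True,
    )
    ok = result.returncode == 0 and result.stdout.strip()
    print(f"{image.name}: {'OK' if ok else 'FAILED'}")
```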
Thanks @vladislavdonchev for the effort and the update. I took a look at the issue you opened with the Qwen team; is it only affecting the 3B model? Can we expect progress to continue with 7B at least? Thank you!
Unfortunately, we're unable to reliably produce a working vision model from either 7B or 3B. I am not sure how the one in the repo was exported, but it seems to be working, so it's either some weird coincidence or a mistake. I've verified the LM part, including in quants, and it also appears to match what you'd expect from Qwen2.5 (parameters in the .gguf seem correct, responses are OK).
Prerequisites
Feature Description
Is anybody implementing this?
If not, I may give it a go, but it will take some time as I am new to the source side of llama.cpp/ggml.
Motivation
Well, it's not currently working. :-)
Possible Implementation
Based on the existing Qwen 2 VL implementation.