
Unsupported 'qwen2_vl' VLM model type #1656

Closed

azhuvath opened this issue Jan 30, 2025 · 10 comments
@azhuvath

I tried quantizing the Qwen/Qwen2-VL-7B-Instruct and Qwen/Qwen2-VL-2B-Instruct models using the steps given at https://docs.openvino.ai/2024/notebooks/qwen2-vl-with-output.html. The quantization process works without issues.

I then tried running inference on the quantized model with the openvino.genai VLM example and got the error below.

(ov_env) PS C:\Users\devcloud\Desktop\Scribbler\Qwen2VL> python .\visual_language_chat.py .\Qwen2-VL-2B-Instruct\ .\demo.jpeg

Traceback (most recent call last):
  File "C:\Users\devcloud\Desktop\Scribbler\Qwen2VL\visual_language_chat.py", line 84, in <module>
    main()
  File "C:\Users\devcloud\Desktop\Scribbler\Qwen2VL\visual_language_chat.py", line 64, in main
    pipe = openvino_genai.VLMPipeline(args.model_dir, device, **enable_compile_cache)
RuntimeError: Exception from C:\Jenkins\workspace\private-ci\ie\build-windows-vs2019\b\repos\openvino.genai\src\cpp\src\visual_language/vlm_model_type.hpp:33:
Unsupported 'qwen2_vl' VLM model type

@Wovchena (Collaborator)

You are using an older GenAI version. The one that supports qwen2_vl hasn't reached release status yet, but you can use the pre-release version: pip install openvino-genai==2025.0.0.0.rc3 --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/pre-release.
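As shell commands, the suggested install looks like this (the version pin and index URL are taken from the comment above; the `__version__` check assumes the package exposes that attribute):

```shell
# Pre-release wheel that adds qwen2_vl support (pin from the comment above).
pip install openvino-genai==2025.0.0.0.rc3 \
    --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/pre-release

# Confirm which version is now installed (assumes __version__ is exposed).
python -c "import openvino_genai; print(openvino_genai.__version__)"
```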

@azhuvath (Author)

> You are using an older GenAI version. The one that supports qwen2_vl hasn't reached release status yet, but you can use the pre-release version: pip install openvino-genai==2025.0.0.0.rc3 --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/pre-release.

Is there an example of how to quantize it and use it during inference? I tried following the sample given on this page: https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide/genai-guide.html

(ov_env) PS C:\Users\devcloud\Desktop\Scribbler\Qwen2VL> python .\visual_language_chat.py .\Qwen2-VL-2B-Instruct\ .\demo.jpeg
Traceback (most recent call last):
  File "C:\Users\devcloud\Desktop\Scribbler\Qwen2VL\visual_language_chat.py", line 84, in <module>
    main()
  File "C:\Users\devcloud\Desktop\Scribbler\Qwen2VL\visual_language_chat.py", line 64, in main
    pipe = openvino_genai.VLMPipeline(args.model_dir, device, **enable_compile_cache)
RuntimeError: Check 'ov_tokenizer || ov_detokenizer' failed at C:\Jenkins\workspace\private-ci\ie\build-windows-vs2022\b\repos\openvino.genai\src\cpp\src\tokenizer.cpp:196:
Neither tokenizer nor detokenzier models were provided

@Wovchena (Collaborator)

Weight compression is performed by optimum-cli; the example command is at https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#performing-visual-language-text-generation. You can check its help for other options, including quantization.

A usage example is here: https://github.com/openvinotoolkit/openvino.genai/tree/master/samples/python/visual_language_chat
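For reference, the linked sample reduces to roughly the following sketch. The VLMPipeline constructor call matches the traceback above; the generate() signature and the image handling are assumptions taken from the sample, not verified here (running it requires an exported model directory).

```python
# Minimal sketch of the visual_language_chat sample (assumptions noted above).
import numpy as np
import openvino
import openvino_genai
from PIL import Image

model_dir = "Qwen2-VL-2B-Instruct"  # directory produced by optimum-cli export
image_path = "demo.jpeg"

# Load the image into an OpenVINO tensor, as the sample does.
image = Image.open(image_path).convert("RGB")
image_tensor = openvino.Tensor(np.array(image))

# Constructor call as seen in the traceback; device can be "CPU" or "GPU".
pipe = openvino_genai.VLMPipeline(model_dir, "CPU")

# generate() signature assumed from the sample.
print(pipe.generate("Describe this image.", image=image_tensor, max_new_tokens=100))
```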

"Neither tokenizer nor detokenzier models were provided" is caused by openvino version mismatch. Here's a refined command to fix this: pip install openvino==2025.0.0.0.rc3 openvino-tokenizers==2025.0.0.0.rc3 openvino-genai==2025.0.0.0.rc3 --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/pre-release. After that you need to export the model again using optimum-cli.

@azhuvath (Author)

> Weight compression is performed by optimum-cli; the example command is at https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#performing-visual-language-text-generation. You can check its help for other options, including quantization.
>
> A usage example is here: https://github.com/openvinotoolkit/openvino.genai/tree/master/samples/python/visual_language_chat
>
> "Neither tokenizer nor detokenzier models were provided" is caused by an openvino version mismatch. Here's a refined command to fix it: pip install openvino==2025.0.0.0.rc3 openvino-tokenizers==2025.0.0.0.rc3 openvino-genai==2025.0.0.0.rc3 --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/pre-release. After that, you need to export the model again with optimum-cli.

The model conversion fails via the optimum route. The issue link:
huggingface/optimum-intel#1137

@Wovchena (Collaborator)

Is it the latest optimum-intel?

@azhuvath (Author)

> Is it the latest optimum-intel?

I executed pip install optimum-intel[openvino]

@Wovchena (Collaborator)

Apparently, they haven't released that version yet. Install from master: pip install git+https://github.com/huggingface/optimum-intel.git

@azhuvath (Author)

azhuvath commented Feb 3, 2025

> Apparently, they haven't released that version yet. Install from master: pip install git+https://github.com/huggingface/optimum-intel.git

Not sure why I am getting this error.

(ov_env) PS C:\Program Files\Git\cmd> pip install git+https://github.com/huggingface/optimum-intel.git
Collecting git+https://github.com/huggingface/optimum-intel.git
Cloning https://github.com/huggingface/optimum-intel.git to c:\users\devcloud\appdata\local\temp\pip-req-build-uxi0lca1
Running command git clone --filter=blob:none --quiet https://github.com/huggingface/optimum-intel.git 'C:\Users\devcloud\AppData\Local\Temp\pip-req-build-uxi0lca1'
Resolved https://github.com/huggingface/optimum-intel.git to commit 2e7c556256cd959bc92d6e01e7a7f29ae0aab92e
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [32 lines of output]
  Traceback (most recent call last):
    File "<string>", line 18, in <module>
    File "C:\Python310\lib\subprocess.py", line 421, in check_output
      return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
    File "C:\Python310\lib\subprocess.py", line 503, in run
      with Popen(*popenargs, **kwargs) as process:
    File "C:\Python310\lib\subprocess.py", line 971, in __init__
      self._execute_child(args, executable, preexec_fn, close_fds,
    File "C:\Python310\lib\subprocess.py", line 1456, in _execute_child
      hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
  FileNotFoundError: [WinError 2] The system cannot find the file specified

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
    File "C:\Users\devcloud\Desktop\Scribbler\Qwen2VL\ov_env\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 389, in <module>   
      main()
    File "C:\Users\devcloud\Desktop\Scribbler\Qwen2VL\ov_env\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 373, in main       
      json_out["return_val"] = hook(**hook_input["kwargs"])
    File "C:\Users\devcloud\Desktop\Scribbler\Qwen2VL\ov_env\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 143, in get_requires_for_build_wheel
      return hook(config_settings)
    File "C:\Users\devcloud\AppData\Local\Temp\pip-build-env-hbgky9_z\overlay\Lib\site-packages\setuptools\build_meta.py", line 334, in get_requires_for_build_wheel
      return self._get_build_requires(config_settings, requirements=[])
    File "C:\Users\devcloud\AppData\Local\Temp\pip-build-env-hbgky9_z\overlay\Lib\site-packages\setuptools\build_meta.py", line 304, in _get_build_requires     
      self.run_setup()
    File "C:\Users\devcloud\AppData\Local\Temp\pip-build-env-hbgky9_z\overlay\Lib\site-packages\setuptools\build_meta.py", line 522, in run_setup
      super().run_setup(setup_script=setup_script)
    File "C:\Users\devcloud\AppData\Local\Temp\pip-build-env-hbgky9_z\overlay\Lib\site-packages\setuptools\build_meta.py", line 320, in run_setup
      exec(code, locals())
    File "<string>", line 27, in <module>
  AssertionError: Error: Could not open 'optimum/intel/version.py' due [WinError 2] The system cannot find the file specified

  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.

@Wovchena (Collaborator)

Wovchena commented Feb 3, 2025

That should work: I can't reproduce it on Windows with Python 3.10 and the same commit (python -m pip install git+https://github.com/huggingface/optimum-intel.git@2e7c556256cd959bc92d6e01e7a7f29ae0aab92e). Try upgrading pip and setuptools.
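The suggested recovery as shell commands (the pinned-commit install is verbatim from the comment above; adding wheel to the upgrade is a common extra precaution, not something the comment mentions):

```shell
# Upgrade the build tooling first; old pip/setuptools can fail to build source installs.
python -m pip install --upgrade pip setuptools wheel

# Retry the install, pinned to the commit that was verified to work.
python -m pip install \
    "git+https://github.com/huggingface/optimum-intel.git@2e7c556256cd959bc92d6e01e7a7f29ae0aab92e"
```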

@azhuvath (Author)

azhuvath commented Feb 3, 2025

> That should work: I can't reproduce it on Windows with Python 3.10 and the same commit (python -m pip install git+https://github.com/huggingface/optimum-intel.git@2e7c556256cd959bc92d6e01e7a7f29ae0aab92e). Try upgrading pip and setuptools.

It might be due to the VPN, which I need to turn off to check.

azhuvath closed this as completed Feb 3, 2025