gradio chat exception #103
I modified the chat function in chat_ui.py like this:

```python
def chat(message, history, temperature, max_tokens):
    chat = []
    if len(message["files"]) >= 1:
        chat.append(message["text"])
    else:
        raise gr.Error("Please upload an image. Text only chat is not supported.")
    # Use the most recently uploaded file as the image input.
    files = message["files"][-1]
    if model.config.model_type != "paligemma":
        messages = apply_chat_template(processor, config, message["text"], num_images=1)
    else:
        # PaliGemma takes the raw prompt text, without a chat template.
        messages = message["text"]
    response = ""
    for chunk in stream_generate(
        model, processor, files, messages, image_processor, max_tokens, temp=temperature
    ):
        response += chunk
        yield response
```

Seems to work.
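For context: with a multimodal gr.ChatInterface, Gradio passes the `message` argument as a dict with `"text"` and `"files"` keys, which is why the function indexes `message["text"]` and `message["files"]`. An illustrative example (values made up, not taken from the issue):

```python
# Shape of the multimodal message dict that Gradio hands to the chat
# function (illustrative values only):
message = {
    "text": "What is in this image?",
    "files": ["/tmp/gradio/uploaded_image.jpg"],
}
```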
I also modified that function in the same way, but then I got a different kind of error:
Did you also modify that file?
I added a string type check for
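(The comment above is cut off, so the exact check it refers to is unknown. If the goal was to guard against non-string chunks coming out of stream_generate, which is an assumption on my part, a minimal sketch could look like this:)

```python
# Hypothetical guard, assuming the truncated comment refers to checking
# that each streamed chunk is a string before concatenating; some
# stream_generate versions may yield result objects rather than plain str.
for chunk in stream_generate(
    model, processor, files, messages, image_processor, max_tokens, temp=temperature
):
    if not isinstance(chunk, str):
        # Fall back to a text attribute if the chunk is a result object.
        chunk = getattr(chunk, "text", str(chunk))
    response += chunk
    yield response
```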
Hi all,

I got an exception when trying to run the gradio example with:

```shell
python -m mlx_vlm.chat_ui --model mlx-community/Qwen2-VL-72B-Instruct-4bit
```

When using the CLI example with:

```shell
python -m mlx_vlm.generate --model mlx-community/Qwen2-VL-72B-Instruct-4bit --max-tokens 100 --temp 0.0 --image http://images.cocodataset.org/val2017/000000039769.jpg
```

there's no exception. I installed the package with:

```shell
pip install mlx-vlm
```

and have tried Python 3.12 and Python 3.10 and got the same result.