Hi, thanks for creating this package! I'm testing out some local vision workflows with Ollama and llama3.2-vision. I seem to get an error when using the content_image_url function to fetch an image:
from chatlas import ChatOllama, content_image_url, content_image_file
# Initialize chat with ollama llama3.2-vision
chat = ChatOllama(model="llama3.2-vision")
# Using content_image_url
chat.chat(
    "What do you see in this image?",
    content_image_url("https://www.python.org/static/img/python-logo.png"),
    echo="all",
)
Throws this error:
👤 User turn:
What do you see in this image?
<< 👤 other content >>
🌆 python-logo.png
🤖 Assistant turn:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "\.venv\lib\site-packages\chatlas\_chat.py", line 356, in chat
for _ in response:
File "\.venv\lib\site-packages\chatlas\_chat.py", line 1169, in __next__
chunk = next(self._generator)
File "\.venv\lib\site-packages\chatlas\_chat.py", line 833, in _chat_impl
for chunk in self._submit_turns(
File "\.venv\lib\site-packages\chatlas\_chat.py", line 884, in _submit_turns
response = self.provider.chat_perform(
File "\.venv\lib\site-packages\chatlas\_openai.py", line 247, in chat_perform
return self._client.chat.completions.create(**kwargs) # type: ignore
File "\.venv\lib\site-packages\openai\_utils\_utils.py", line 275, in wrapper
return func(*args, **kwargs)
File "\.venv\lib\site-packages\openai\resources\chat\completions.py", line 829, in create
return self._post(
File "\.venv\lib\site-packages\openai\_base_client.py", line 1280, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "\.venv\lib\site-packages\openai\_base_client.py", line 957, in request
return self._request(
File "\.venv\lib\site-packages\openai\_base_client.py", line 1061, in _request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': 'invalid image input', 'type': 'invalid_request_error', 'param': None, 'code': None}}
The content_image_url("https://www.python.org/static/img/python-logo.png") seems to fetch the image and store it as a ContentImageRemote object.
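For context, OpenAI-compatible endpoints accept images in an image_url content part either as a remote URL or inlined as a base64 data URI; the base64 fragment echoed by the working content_image_file example below suggests Ollama only handles the inlined form. The sketch below (an illustration, not chatlas's actual wire code; the exact payload chatlas sends is an assumption) shows the difference between the two shapes:

```python
import base64


def to_data_uri(image_bytes: bytes, mime: str = "image/png") -> str:
    """Encode raw image bytes as a base64 data URI, the inline format
    OpenAI-compatible endpoints accept in image_url content parts."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{b64}"


# Hypothetical payload shapes for the two approaches:
# what ContentImageRemote presumably produces (a bare remote URL):
remote_part = {
    "type": "image_url",
    "image_url": {"url": "https://www.python.org/static/img/python-logo.png"},
}
# what content_image_file appears to produce (bytes inlined as a data URI):
inline_part = {
    "type": "image_url",
    "image_url": {"url": to_data_uri(b"\x89PNG...")},  # placeholder bytes
}
```

If Ollama never fetches remote URLs itself, the first shape would explain the 400 "invalid image input" error.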
Also, this works fine with the content_image_file function:
chat.chat(
    "What do you see in this image?",
    content_image_file("python-logo.png"),
    echo="all",
)
👤 User turn:
What do you see in this image?
<< 👤 other content >>
🌆 Dw2mJn+YhmuMAAAAAElFTkSuQmCC
🤖 Assistant turn:
The image features the Python logo, which is a stylized representation of the
word "python" and serves as an emblem for the official implementation of the
Python programming language.
Key Features:
• Color scheme:
• Primary color: Blue
• Secondary color: Yellow
• Positioning: Left side of the screen
Purpose: The logo is used to represent the official Python programming
language, providing a clear visual identifier for users.
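Since the content_image_file path works, one possible interim workaround is to download the remote image to a temp file first and pass that through content_image_file. This is a sketch only — download_image is a hypothetical helper, not part of chatlas, and I haven't verified it against a running Ollama server:

```python
import tempfile
import urllib.request
from pathlib import Path


def download_image(url: str, suffix: str = ".png") -> Path:
    """Fetch an image URL to a local temp file so it can be passed to
    content_image_file(), which inlines the bytes as base64."""
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    tmp = tempfile.NamedTemporaryFile(suffix=suffix, delete=False)
    tmp.write(data)
    tmp.close()
    return Path(tmp.name)


# Usage (assumes a running Ollama server with llama3.2-vision pulled):
# path = download_image("https://www.python.org/static/img/python-logo.png")
# chat.chat("What do you see in this image?",
#           content_image_file(str(path)), echo="all")
```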
I've tested this with version 0.2.0 and version 0.2.1.dev1+g6e81eb3 of chatlas.