Llama3.x models run through ipex-llm's ollama do not work with LangChain chat/tool integration.
Here's a minimal example of a chat model with tools:
```python
# pip install langchain langchain-ollama
# ollama pull llama3.2:3b-instruct-q4_K_M
from langchain_core.tools import tool
from langchain_ollama.chat_models import ChatOllama


@tool
def get_weather(location: str):
    """Call to get the current weather."""
    if location.lower() in ["sf", "san francisco"]:
        return "It's 60 degrees and foggy."
    else:
        return "It's 90 degrees and sunny."


model = ChatOllama(
    model="llama3.2:3b-instruct-q4_K_M",
    num_predict=50,  # limit number of tokens to stop hallucination
)

tools = [get_weather]
model_with_tools = model.bind_tools(tools)

res = model_with_tools.invoke("what's the weather in sf?")
res.pretty_print()
```
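For reference, a working tool call from the model arrives as plain JSON inside the Ollama chat response before LangChain converts it to an `AIMessage`. A minimal stdlib-only sketch of what that payload looks like and how it is read (the sample body below is illustrative, not captured from a real run; the field names follow Ollama's documented `/api/chat` schema):

```python
import json

# Illustrative response body in the shape Ollama's /api/chat returns
# when the model emits a tool call (sample values, not a captured trace).
raw = """
{
  "model": "llama3.2:3b-instruct-q4_K_M",
  "message": {
    "role": "assistant",
    "content": "",
    "tool_calls": [
      {"function": {"name": "get_weather", "arguments": {"location": "sf"}}}
    ]
  },
  "done": true
}
"""

response = json.loads(raw)
tool_calls = response["message"].get("tool_calls", [])
for call in tool_calls:
    fn = call["function"]
    print(f"{fn['name']} called with {fn['arguments']}")
```

When the bug reproduces, the `message.content` instead holds free-form hallucinated text and `tool_calls` is absent.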
This example runs correctly with standard ollama. Expected response with tool arguments:
```
================================== Ai Message ==================================
Tool Calls:
  get_weather (328879e5-247c-48d1-9013-39f0e1b65539)
 Call ID: 328879e5-247c-48d1-9013-39f0e1b65539
  Args:
    location: sf
```
Actual output shows that the model just hallucinates:
```
================================== Ai Message ==================================
I hope that a new
$ has several times this would be =_._ _-level was an item 8/<< is not only to the best
To view= (or is a significant but also knows are still allow [or)
```
Tested with:
- Ubuntu 22.04.5 LTS
- oneapi/2025.0
- Python 3.10.0
- ipex-llm 2.2.0b20250105, 2.2.0b20250123
- langchain 0.3.17
- langchain-ollama 0.2.3
- GPU: Intel(R) Data Center GPU Max 1100
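One way to narrow down whether the regression sits in ipex-llm's ollama build rather than in LangChain is to POST to the `/api/chat` endpoint directly with a `tools` array and compare the raw responses of both servers. A hedged sketch of building that request with the standard library only (the schema follows Ollama's public API documentation; the host and port are the Ollama defaults and may differ in your setup):

```python
import json
import urllib.request

# Tool schema in Ollama's /api/chat format, mirroring the LangChain example above.
payload = {
    "model": "llama3.2:3b-instruct-q4_K_M",
    "messages": [{"role": "user", "content": "what's the weather in sf?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Call to get the current weather.",
                "parameters": {
                    "type": "object",
                    "properties": {"location": {"type": "string"}},
                    "required": ["location"],
                },
            },
        }
    ],
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/chat",  # default Ollama address; adjust if needed
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

# Uncomment to send against a running server and inspect the raw message:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["message"])
```

If the raw ipex-llm response also lacks `tool_calls` while stock ollama returns them, the problem is in the serving layer rather than the LangChain integration.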