Using other AI model Ollama #184
Comments
Hi @AlazaziAmr, I believe that's the model's fault, not anything specific caused by django-ai-assistant.
If you're using the example projects, set those in your .env file.
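For context, the example projects read the model credentials from environment variables. A minimal `.env` sketch, assuming the default OpenAI-backed setup; check the example project's README for the exact variable names it expects:

```env
# .env for the example project (sketch only; values are placeholders)
OPENAI_API_KEY=sk-your-key-here
```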
Hi @fjsj, everything is set correctly, but Ollama (with Llama) can't find the specific function to do the work the way OpenAI does.
Is the specific Ollama model you're using really fine-tuned to perform function calling? @AlazaziAmr
As mentioned on their website, it supports tool calling. @fjsj
Did you double-check, with LangSmith or with logs, the request made to the model? Are the function tools being passed?
No, the function tools are not passed.
What ChatModel are you using? It should be one that supports "Tool calling" from this table: https://python.langchain.com/docs/integrations/chat/#featured-providers Is it really ChatOllama?
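For reference, one way to confirm whether a given Ollama model actually emits tool calls is to bind a dummy tool directly in LangChain, outside of django-ai-assistant. This is a minimal sketch, assuming the `langchain-ollama` package and a locally pulled `llama3.1` model; the `get_temperature` tool is hypothetical:

```python
from langchain_core.tools import tool
from langchain_ollama import ChatOllama


@tool
def get_temperature(city: str) -> str:
    """Get the current temperature for a city."""
    return "25C"  # stub data for the sketch


llm = ChatOllama(model="llama3.1")
llm_with_tools = llm.bind_tools([get_temperature])

response = llm_with_tools.invoke("What is the temperature in Berlin?")
# If the model supports function calling, response.tool_calls should contain
# a call to get_temperature; an empty list suggests the model ignored the tools.
print(response.tool_calls)
```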
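For reference, assistants in django-ai-assistant can override `get_llm` to swap in a different chat model. Below is a minimal sketch, assuming the `langchain-ollama` package and an Ollama model that supports tool calling (e.g. `llama3.1`); the `WeatherAssistant` class and `fetch_weather` tool are hypothetical names, not part of the library:

```python
from django_ai_assistant import AIAssistant, method_tool
from langchain_ollama import ChatOllama  # pip install langchain-ollama


class WeatherAssistant(AIAssistant):  # hypothetical assistant for illustration
    id = "weather_assistant"
    name = "Weather Assistant"
    instructions = "Use the provided tools to answer questions about the weather."
    model = "llama3.1"  # must be an Ollama model fine-tuned for tool calling

    def get_llm(self):
        # Return a ChatOllama instance instead of the default OpenAI chat model.
        return ChatOllama(model=self.model, temperature=self.temperature)

    @method_tool
    def fetch_weather(self, city: str) -> str:
        """Fetch the current weather for a city."""
        return f"It is sunny in {city}."  # stub data for the sketch
```

If the tools still do not appear in the request sent to Ollama, the model itself is the likely culprit rather than the assistant definition.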
Yes, it seems right. Looks like it's a model problem. |
Yes, I'll try to dig deeper into it and find the problem, and I'll let you know about any updates. Thank you so much for your time, I appreciate it.
While using Ollama (with Llama), the outputs/responses are not as good and accurate as OpenAI's. Is there a special code implementation needed in the ai_assistants.py file to use Ollama (with Llama)?