Problem Statement
`legion/providers/ollama.py` should be updated to ensure the four primary modes of Agent operation work with models running on Ollama, those being:
Text in -> text out
Text in -> text out (with tool calls)
Text in -> JSON out
Text in -> JSON out (with tool calls)
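As a reference point, the four modes above can be sketched as the request shapes a provider would build before calling the Ollama chat API. This is a minimal sketch, not Legion code: the model name, `build_chat_payload` helper, and `get_weather` tool schema are illustrative assumptions; the keyword layout mirrors ollama-python's `chat()` arguments (`format="json"` for structured output, OpenAI-style `tools` schemas).

```python
def build_chat_payload(prompt, json_mode=False, tools=None):
    """Assemble the kwargs that would be passed to ollama.chat()."""
    payload = {
        "model": "llama3.1",  # assumed local model; any tool-capable model works
        "messages": [{"role": "user", "content": prompt}],
    }
    if json_mode:
        payload["format"] = "json"  # Ollama's structured-output switch
    if tools:
        payload["tools"] = tools    # OpenAI-style function schemas
    return payload

# Hypothetical tool schema for illustration only
weather_tool = {
    "type": "function",
    "function": {"name": "get_weather",
                 "parameters": {"type": "object", "properties": {}}},
}

# The four modes the ticket lists:
text_out        = build_chat_payload("Hello")
text_with_tools = build_chat_payload("Weather in Oslo?", tools=[weather_tool])
json_out        = build_chat_payload("Hello", json_mode=True)
json_with_tools = build_chat_payload("Weather in Oslo?", json_mode=True,
                                     tools=[weather_tool])
```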
Proposed Solution
Use the current OpenAI provider as a starting point, since it is working correctly. The primary difference between the OpenAI provider and the Ollama provider is purely the expectations of their respective APIs, so refer to the [Ollama API docs](https://github.com/ollama/ollama-python).
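Since the differences are mostly API-shape, much of the port reduces to translating OpenAI-style call parameters into their Ollama equivalents. A hedged sketch of that mapping, assuming a helper named `translate_openai_kwargs` (not part of either library), with the known correspondences: `response_format={"type": "json_object"}` becomes `format="json"`, tool schemas keep the same OpenAI-style shape, and sampling parameters move under Ollama's `options` dict.

```python
def translate_openai_kwargs(openai_kwargs):
    """Map OpenAI chat.completions kwargs onto ollama.chat() kwargs (sketch)."""
    ollama_kwargs = {
        "model": openai_kwargs["model"],
        "messages": openai_kwargs["messages"],
    }
    # OpenAI's response_format={"type": "json_object"} becomes format="json".
    if openai_kwargs.get("response_format", {}).get("type") == "json_object":
        ollama_kwargs["format"] = "json"
    # Tool schemas use the same OpenAI-style shape in both libraries.
    if "tools" in openai_kwargs:
        ollama_kwargs["tools"] = openai_kwargs["tools"]
    # Sampling parameters live under an "options" dict in Ollama.
    options = {k: v for k, v in openai_kwargs.items()
               if k in ("temperature", "top_p", "seed")}
    if options:
        ollama_kwargs["options"] = options
    return ollama_kwargs

translated = translate_openai_kwargs({
    "model": "llama3.1",
    "messages": [{"role": "user", "content": "Hello"}],
    "response_format": {"type": "json_object"},
    "temperature": 0.2,
})
```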
Create a new test file in tests/providers modeled after tests/providers/test_openai.py with integration tests for the Ollama model provider.
Write a script that defines an agent using an Ollama-supported model and exercises it to confirm it works within the Legion component system.
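Some of the test coverage can run without a live server. The following is a hedged sketch of a provider smoke test in which a stub stands in for `ollama.Client`, returning a canned response shaped like the real chat API; the names `StubOllamaClient` and `run_agent_turn` are hypothetical, and Legion's actual Agent wiring would take the place of `run_agent_turn` in the real tests.

```python
class StubOllamaClient:
    """Mimics ollama.Client.chat() closely enough for a unit test."""

    def chat(self, model, messages, **kwargs):
        # Echo the last user message back in the Ollama response shape.
        return {"message": {"role": "assistant",
                            "content": f"echo: {messages[-1]['content']}"}}

def run_agent_turn(client, model, user_input):
    """One agent turn: send the user message, return the assistant text."""
    response = client.chat(model=model,
                           messages=[{"role": "user", "content": user_input}])
    return response["message"]["content"]

reply = run_agent_turn(StubOllamaClient(), "llama3.1", "ping")
print(reply)  # echo: ping
```

The integration tests against a real local model would use the same `run_agent_turn` shape with a genuine client, skipped when no Ollama server is reachable.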
Alternative Solutions
n/a
Additional Context
Some model providers publish their own API, but also offer a separate set of OpenAI-compatible endpoints. If that is the case here, consider using those endpoints, as OpenAI's API is the simplest and most straightforward to target.
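Ollama is one such provider: it serves an OpenAI-compatible layer under `/v1` on its default port, so the existing OpenAI provider could potentially be reused by changing the base URL. A sketch under that assumption (the `api_key` value is a placeholder: Ollama ignores it, but the OpenAI SDK requires one to be set):

```python
# Ollama's OpenAI-compatible endpoint on the default local port.
OLLAMA_OPENAI_BASE_URL = "http://localhost:11434/v1"

# Usage sketch, assuming the openai package and a running Ollama server:
# from openai import OpenAI
# client = OpenAI(base_url=OLLAMA_OPENAI_BASE_URL, api_key="ollama")
# client.chat.completions.create(
#     model="llama3.1",
#     messages=[{"role": "user", "content": "Hello"}],
# )
```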
The Ollama python library was designed to be very similar to OpenAI's, so there should be limited differences.
Implementation Ideas
Benefits
Serves Legion's goal of being provider agnostic.
Potential Challenges
This ticket does require having Ollama installed on your system, as well as hardware capable of running at minimum a 7B-parameter model (16 GB+ RAM recommended). For installation instructions, visit the [Ollama install instructions](https://ollama.com/).