
[FEATURE] Finalize Ollama model provider integration #11

Open
og-hayden opened this issue Jan 10, 2025 · 3 comments
Labels: enhancement (New feature or request), good first issue (Good for newcomers)

@og-hayden (Collaborator)

Problem Statement

legion/providers/ollama.py should be updated to ensure that all four primary modes of Agent operation work with models running under Ollama (see the sketch after this list), those being:

  1. Text in -> text out
  2. Text in -> text out (with tool calls)
  3. Text in -> JSON out
  4. Text in -> JSON out (with tool calls)
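
A minimal sketch of the four modes using the ollama Python client directly (https://github.com/ollama/ollama-python), which the Legion provider would wrap. The model name and tool schema here are illustrative assumptions, and the tool-call modes assume a tool-capable model:

import ollama

messages = [{"role": "user", "content": "What's the weather in Paris?"}]
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Look up the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# 1. Text in -> text out
r1 = ollama.chat(model="llama3.1", messages=messages)

# 2. Text in -> text out (with tool calls)
r2 = ollama.chat(model="llama3.1", messages=messages, tools=tools)

# 3. Text in -> JSON out
r3 = ollama.chat(model="llama3.1", messages=messages, format="json")

# 4. Text in -> JSON out (with tool calls)
r4 = ollama.chat(model="llama3.1", messages=messages, tools=tools, format="json")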

Proposed Solution

  1. Use the current OpenAI provider as a starting point, since it is working correctly. The two providers differ mainly in the expectations of their respective APIs, so refer to the Ollama Python client docs (https://github.com/ollama/ollama-python).
  2. Create a new test file in tests/providers, modeled after tests/providers/test_openai.py, with integration tests for the Ollama model provider (a minimal shape is sketched after this list).
  3. Write a script that defines an agent using an Ollama-supported model and test it to ensure it works within the Legion component system.
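
One possible starting shape for that test file, hedged since the exact fixtures in tests/providers/test_openai.py aren't reproduced here; it exercises the Ollama client directly and skips when no local daemon is running (model name assumed):

# tests/providers/test_ollama.py (sketch)
import json

import ollama
import pytest


def _ollama_running() -> bool:
    # Probe the local daemon; any failure means tests should be skipped.
    try:
        ollama.list()
        return True
    except Exception:
        return False


pytestmark = pytest.mark.skipif(
    not _ollama_running(), reason="Ollama daemon is not running"
)


def test_text_in_text_out():
    resp = ollama.chat(
        model="llama3.1",
        messages=[{"role": "user", "content": "Say hello in one word."}],
    )
    assert resp["message"]["content"].strip()


def test_text_in_json_out():
    resp = ollama.chat(
        model="llama3.1",
        messages=[{"role": "user", "content": 'Reply with JSON: {"greeting": "..."}'}],
        format="json",
    )
    assert "greeting" in json.loads(resp["message"]["content"])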

Alternative Solutions

n/a

Additional Context

Sometimes model providers have their own API docs but also offer a separate set of endpoints that are OpenAI-compatible. If that is the case, consider using those endpoints, as OpenAI's API is the simplest and most straightforward to use.
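
Ollama does expose OpenAI-compatible endpoints under /v1, so one option is to reuse the standard OpenAI client against a local model. A sketch (model name assumed; the api_key value is ignored by Ollama but required by the client):

from openai import OpenAI

# Point the standard OpenAI client at the local Ollama daemon.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

resp = client.chat.completions.create(
    model="llama3.1",  # any locally pulled model
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)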

The Ollama Python library was designed to be very similar to OpenAI's, so the differences should be limited.

Implementation Ideas

Benefits

Serves Legion's goal of being provider agnostic.

Potential Challenges

This ticket requires having Ollama installed on your system, as well as hardware capable of running at least a 7B-parameter model (16 GB+ RAM recommended). For installation instructions, visit https://ollama.com/
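
Once the daemon is installed, the model itself can be fetched from Python; a small sketch (model name assumed):

import ollama

ollama.pull("llama3.1")  # download the model if it isn't present locally
print(ollama.list())     # confirm it appears among the local models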

@og-hayden added the enhancement and good first issue labels on Jan 10, 2025
@djnatronic

I've got Ollama and I've connected to it using Node.

Are you thinking something like this, but in Python?

// Sketch completed for runnability: imports added, and the truncated tail of
// the original snippet (closing braces, a generate() implementation, and
// server startup) is filled in as an assumption. Requires Node 18+ for fetch.
const express = require('express');
const winston = require('winston');

class ModelService {
    constructor(model = 'llama3:8b') {
        this.model = model;
        this.baseUrl = 'http://127.0.0.1:11434/api';
        this.port = 3007;
        this.app = express();
        this.setupLogging();
        this.setupRoutes();
    }

    setupLogging() {
        // Log to both the console and a local file.
        this.logger = winston.createLogger({
            level: 'info',
            format: winston.format.json(),
            transports: [
                new winston.transports.Console(),
                new winston.transports.File({ filename: 'model-service.log' })
            ]
        });
    }

    setupRoutes() {
        this.app.use(express.json());

        this.app.post('/generate', async (req, res) => {
            try {
                const { prompt, params = {} } = req.body;
                const response = await this.generate(prompt, params);
                res.json(response);
            } catch (error) {
                this.logger.error('Generation error:', error);
                res.status(500).json({ error: error.message });
            }
        });
    }

    // Forward the prompt to Ollama's /api/generate endpoint (assumed
    // completion; the original snippet was cut off before this point).
    async generate(prompt, params = {}) {
        const response = await fetch(`${this.baseUrl}/generate`, {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({ model: this.model, prompt, stream: false, ...params })
        });
        return response.json();
    }
}

const service = new ModelService();
service.app.listen(service.port);

@Tanner-Perham

I'm going to take this on!

I've got the same suite of tests as the OpenAI Provider passing for the Ollama Provider now. Just implementing an example agent and troubleshooting.

@Tanner-Perham

Pull Request now available for review!
