feat: per-agent LLM configuration (model and provider key) #355

@jhezjkp

Summary

Currently, model routing and LLM provider API keys are configured at the instance level and shared across all agents. There is no way to assign a specific model or provider key to an individual agent.

Problem

Model routing is global

The worker resolves its model without any agent context:

```rust
// src/agent/worker.rs
let model_name = routing.resolve(ProcessType::Worker, None).to_string();
//                                                    ^^^^ no agent context
```

All agents in the instance use the same model, determined solely by the global routing config.

Provider API keys are global

LLM provider keys are defined once under [llm.provider.*] and apply to every agent. There is no way to assign a per-agent billing key or use a different provider for a specific agent.
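For reference, today's global provider configuration looks roughly like this (provider and key names are illustrative):

```toml
# Illustrative only: one key per provider, shared by every agent.
[llm.provider.openai]
api_key = "secret:OPENAI_API_KEY"
```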

Expected Behavior

Agent configuration should support optional LLM overrides:

```toml
[agents.my-agent]
# override the global default model for this agent
model = "gpt-4o-mini"

# override the provider API key for this agent
llm_api_key = "secret:MY_AGENT_OPENAI_KEY"
```

When set, these values take precedence over the global routing and provider config for that agent only. Agents without overrides continue to use the global defaults.

Use Cases

  • Different agents have different performance or cost requirements (lightweight tasks → small model, complex tasks → large model)
  • Multiple users sharing one instance each want to use their own API key and preferred model
  • Gradual model migration: test a new model on one agent before rolling it out globally
