Description
Summary
Currently, model routing and LLM provider API keys are configured at the instance level and shared across all agents. There is no way to assign a specific model or provider key to an individual agent.
Problem
Model routing is global
The worker resolves its model without any agent context:

```rust
// src/agent/worker.rs
let model_name = routing.resolve(ProcessType::Worker, None).to_string();
//                                                    ^^^^ no agent context
```

All agents in the instance use the same model, determined solely by the global routing config.
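As a sketch of the requested behavior, the resolver could accept an optional agent name and prefer a per-agent override before falling back to the global default. All type and field names below (`RoutingConfig`, `agent_overrides`, `defaults`) are assumptions for illustration, not the project's actual API:

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum ProcessType {
    Worker,
}

// Hypothetical routing config with optional per-agent model overrides.
struct RoutingConfig {
    /// Global default model per process type.
    defaults: HashMap<ProcessType, String>,
    /// Per-agent overrides, keyed by agent name.
    agent_overrides: HashMap<String, String>,
}

impl RoutingConfig {
    /// Resolve a model name, preferring a per-agent override when one exists.
    fn resolve(&self, process: ProcessType, agent: Option<&str>) -> &str {
        agent
            .and_then(|name| self.agent_overrides.get(name))
            .or_else(|| self.defaults.get(&process))
            .map(String::as_str)
            .unwrap_or("default-model")
    }
}

fn main() {
    let routing = RoutingConfig {
        defaults: HashMap::from([(ProcessType::Worker, "gpt-4o".to_string())]),
        agent_overrides: HashMap::from([("my-agent".to_string(), "gpt-4o-mini".to_string())]),
    };
    // Agent with an override gets its own model; everyone else keeps the default.
    println!("{}", routing.resolve(ProcessType::Worker, Some("my-agent")));
    println!("{}", routing.resolve(ProcessType::Worker, None));
}
```

Agents without an entry in `agent_overrides` behave exactly as today, so the change is backward compatible.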
Provider API keys are global
LLM provider keys are defined once under [llm.provider.*] and apply to every agent. There is no way to assign a per-agent billing key or use a different provider for a specific agent.
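For illustration, the current global shape might look like this (the exact key names under `[llm.provider.*]` are an assumption):

```toml
# One key per provider, shared by every agent in the instance.
[llm.provider.openai]
api_key = "secret:OPENAI_KEY"
```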
Expected Behavior
Agent configuration should support optional LLM overrides:
```toml
[agents.my-agent]
# override the global default model for this agent
model = "gpt-4o-mini"
# override the provider API key for this agent
llm_api_key = "secret:MY_AGENT_OPENAI_KEY"
```

When set, these values take precedence over the global routing and provider config for that agent only. Agents without overrides continue to use the global defaults.
Use Cases
- Different agents have different performance or cost requirements (lightweight tasks → small model, complex tasks → large model)
- Multiple users sharing one instance each want to use their own API key and preferred model
- Gradual model migration: test a new model on one agent before rolling it out globally
Related
- feat: per-agent secret namespacing / isolation #354 — per-agent secret store isolation (separate but related multi-tenancy concern)