Merged
17 changes: 6 additions & 11 deletions .env.example
@@ -18,18 +18,13 @@ POSTGRES_DATABASE=raganything
# === MinIO ===
MINIO_ENDPOINT=localhost:9040
MINIO_ACCESS_KEY=minioadmin
MINIO_SECRET_KEY=minioadmin
MINIO_SECRET_KEY=your-minio-secret-key
MINIO_BUCKET=composable-agents
MINIO_SECURE=false

# === Tracing ===
TRACING_PROVIDER=none
TRACING_ENABLED=false
TRACING_PROJECT_NAME=composable-agents
LANGFUSE_HOST=https://cloud.langfuse.com
LANGFUSE_PUBLIC_KEY=
LANGFUSE_SECRET_KEY=
PHOENIX_COLLECTOR_ENDPOINT=http://localhost:6006
PHOENIX_API_KEY=
LANGCHAIN_API_KEY=
LANGCHAIN_PROJECT=composable-agents
PROVIDER=phoenix # or "langfuse" or "none"
PROJECT_NAME=composable-agents
PHOENIX_COLLECTOR_ENDPOINT=https://phoenix.soludev.tech/
PHOENIX_API_KEY=your-phoenix-api-key

4 changes: 3 additions & 1 deletion .github/workflows/ci.yaml
@@ -31,7 +31,9 @@ jobs:
key: venv-${{ runner.os }}-${{ hashFiles('**/uv.lock') }}

- name: Install dependencies
run: uv sync --dev
run: |
uv sync --dev
uv sync --dev --extra phoenix

- name: Setup environment file
run: cp .env.example .env
2 changes: 2 additions & 0 deletions .gitignore
@@ -14,3 +14,5 @@ trivy-report-fixed.json
coverage.xml
.coverage
.scannerwork/
opencode.json
.vscode/
181 changes: 168 additions & 13 deletions README.md
@@ -29,11 +29,7 @@ cp .env.example .env
Edit `.env` and add your API key and database credentials:

```dotenv
ANTHROPIC_API_KEY=sk-ant-...
# or
OPENAI_API_KEY=sk-...
# or
GOOGLE_API_KEY=...

# PostgreSQL (required)
POSTGRES_HOST=localhost
@@ -62,7 +58,7 @@ uv run python -m src validate agents/my-agent.yaml
### Launch the server

```bash
uv run python -m src serve
uv run python -m src.main serve
```

The API starts on `http://localhost:8000`. On startup, the server:
@@ -281,6 +277,9 @@ All endpoints are prefixed appropriately. The server runs on `http://localhost:8
| `GET` | `/api/v1/agents` | List all agent configs from `agents/` directory | `200` |
| `GET` | `/api/v1/agents/{agent_name}` | Get a specific agent configuration | `200` |
| `WS` | `/api/v1/ws/{thread_id}` | WebSocket endpoint for streaming chat | -- |
| `POST` | `/prompts/create` | Create a new prompt | `200` |
| `GET` | `/prompts/get/{identifier}` | Get a specific prompt by identifier, version, or tag | `200` |
| `PUT` | `/prompts/update/{identifier}` | Update an existing prompt (creates new version) | `200` |

### Error Responses

@@ -570,6 +569,127 @@ curl -X DELETE http://localhost:8000/api/v1/threads/a1b2c3d4-e5f6-7890-abcd-ef12

Response: `204 No Content`

### 14. Prompt Management

Prompts are managed via a dedicated registry backed by Phoenix. Enable prompt management by setting `TRACING_PROVIDER=phoenix` and `PHOENIX_PROMPT_ENABLED=true` in your `.env`.

#### 14.1 Create a Prompt

```bash
curl -X POST http://localhost:8000/prompts/create \
-H "Content-Type: application/json" \
-d '{
"identifier": "customer-support",
"content": [
{
"role": "system",
"content": "You are a helpful customer support agent. Be polite and professional."
}
],
"model_name": "claude-sonnet-4-5-20250929",
"description": "Prompt for general customer support queries",
"tags": ["production"],
"metadata": {"project_name": "composable-agents", "agent_type": "deep_agent"}
}'
```

Response (`200`):

```json
{
"status": "success",
"prompt": {
"identifier": "customer-support",
"description": "Prompt for general customer support queries",
"current_version": {
"version_id": "v1",
"content": [...],
"model_name": "claude-sonnet-4-5-20250929",
"created_at": "2025-01-15T10:30:00.000000",
      "tags": ["production"]
},
"created_at": "2025-01-15T10:30:00.000000",
"updated_at": "2025-01-15T10:30:00.000000"
}
}
```

#### 14.2 Get a Prompt

```bash
curl http://localhost:8000/prompts/get/customer-support
```

Optional query parameters:
- `version_id`: Get a specific version
- `tag`: Get the prompt with a specific tag

Response (`200`):

```json
{
"status": "success",
"prompt": {
"identifier": "customer-support",
"description": "",
"current_version": {
"version_id": "UHJvbXB0VmVyc2lvbjo4Mw==",
"content": [
{
"role": "system",
"content": "You are a helpful customer support agent. Be polite and professional."
}
],
"model_name": "claude-sonnet-4-5-20250929",
"tags": []
},
"created_at": null,
"updated_at": null
}
}
```

#### 14.3 Update a Prompt

Create a new version of an existing prompt:

```bash
curl -X PUT http://localhost:8000/prompts/update/customer-support \
-H "Content-Type: application/json" \
-d '{
"content": [
{
"role": "system",
"content": "You are a knowledgeable customer support agent. Be polite, professional, and thorough in your responses."
}
],
"model_name": "claude-sonnet-4-5-20250929",
"description": "Updated prompt for customer support (more detailed)",
"tags": ["production"],
"metadata": {"project_name": "composable-agents", "agent_type": "deep_agent"}
}'
```

Response (`200`):

```json
{
"status": "success",
"prompt": {
"identifier": "customer-support",
"description": "Updated prompt for customer support (more detailed)",
"current_version": {
"version_id": "v2",
"content": [...],
"model_name": "claude-sonnet-4-5-20250929",
"created_at": "2025-01-15T10:31:00.000000",
      "tags": ["production"]
}
},
"message": "Prompt 'customer-support' updated successfully"
}
```

### WebSocket

Connect to the WebSocket endpoint and send JSON messages:
@@ -588,6 +708,49 @@ ws.onmessage = (event) => {

---

## Prompt Management Setup

To enable prompt management in Phoenix:

### 1. Install Optional Dependencies

```bash
uv sync --extra phoenix
```

Or add the packages to `pyproject.toml` as an extra, matching the project's existing PEP 621 list syntax:
```toml
[project.optional-dependencies]
phoenix = [
    "arize-phoenix-otel>=0.1.0",
    "openinference-instrumentation-langchain>=0.1.0",
    "httpx>=0.27.0",
]
```

### 2. Configure Environment Variables

Add to `.env`:

```dotenv
PROVIDER=phoenix
PHOENIX_COLLECTOR_ENDPOINT=http://localhost:6006
PHOENIX_PROMPT_ENABLED=true
PHOENIX_API_KEY=your-api-key-here
```

### 3. Architecture

Prompt management follows the **Clean Architecture** pattern:

- **Domain Entity** (`src/domain/entities/prompt.py`): `Prompt`, `PromptVersion`
- **Domain Port** (`src/domain/ports/prompt_manager.py`): `PromptManager` interface
- **Use Cases** (`src/application/use_cases/`): `CreatePromptUseCase`, `GetPromptUseCase`, `SearchPromptsUseCase`, `UpdatePromptUseCase`
- **Request DTOs** (`src/application/requests/prompt.py`): Request models for each endpoint
- **Routes** (`src/application/routes/prompts.py`): FastAPI endpoint handlers
- **Infrastructure Adapter** (`src/infrastructure/tracing/phoenix_prompt_manager.py`): Phoenix REST API implementation

All prompt management operations are async and fully integrated with the FastAPI dependency injection system.
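
As a rough sketch of how these layers compose (class names follow the files listed above, but the method signatures and fields are assumptions, not the repository's actual interface):

```python
import asyncio
from abc import ABC, abstractmethod
from dataclasses import dataclass, field


# Domain entity (mirrors src/domain/entities/prompt.py; fields are illustrative)
@dataclass
class Prompt:
    identifier: str
    content: list[dict]
    model_name: str
    tags: list[str] = field(default_factory=list)


# Domain port (mirrors src/domain/ports/prompt_manager.py)
class PromptManager(ABC):
    @abstractmethod
    async def create(self, prompt: Prompt) -> Prompt: ...


# The use case depends only on the port, never on Phoenix directly
class CreatePromptUseCase:
    def __init__(self, manager: PromptManager):
        self._manager = manager

    async def execute(self, prompt: Prompt) -> Prompt:
        return await self._manager.create(prompt)


# The real adapter would call the Phoenix REST API; an in-memory
# stand-in shows the wiring without a running Phoenix instance
class InMemoryPromptManager(PromptManager):
    def __init__(self):
        self._store: dict[str, Prompt] = {}

    async def create(self, prompt: Prompt) -> Prompt:
        self._store[prompt.identifier] = prompt
        return prompt


async def main() -> Prompt:
    use_case = CreatePromptUseCase(InMemoryPromptManager())
    return await use_case.execute(
        Prompt(
            identifier="customer-support",
            content=[{"role": "system", "content": "Be polite."}],
            model_name="claude-sonnet-4-5-20250929",
        )
    )


result = asyncio.run(main())
print(result.identifier)
```

Because the use case only sees the `PromptManager` port, swapping the Phoenix adapter for a test double requires no changes to application code.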

---

## Architecture

composable-agents follows a strict **hexagonal architecture** (ports and adapters). The domain layer has zero dependencies on frameworks or infrastructure.
@@ -902,9 +1065,7 @@ Configured via `.env` file or environment variables. See `.env.example`.
| Variable | Default | Description |
|---|---|---|
| `AGENTS_DIR` | `./agents` | Directory containing agent YAML configuration files. |
| `ANTHROPIC_API_KEY` | -- | API key for Anthropic models. |
| `OPENAI_API_KEY` | -- | API key for OpenAI models. |
| `GOOGLE_API_KEY` | -- | API key for Google models. |
| `OPENAI_BASE_URL` | `https://api.openai.com/v1` | Base URL for OpenAI-compatible endpoints. Set to use OpenRouter, LiteLLM, vLLM, etc. |
| `HOST` | `0.0.0.0` | Server bind host. |
| `PORT` | `8000` | Server bind port. |
@@ -936,15 +1097,9 @@ The async connection URL is built automatically as `postgresql+asyncpg://<user>:
| Variable | Default | Description |
|---|---|---|
| `TRACING_PROVIDER` | `none` | Tracing backend: `none`, `langfuse`, or `phoenix`. |
| `TRACING_ENABLED` | `false` | Enable/disable tracing. |
| `TRACING_PROJECT_NAME` | `composable-agents` | Project name for the tracing backend. |
| `LANGFUSE_HOST` | `https://cloud.langfuse.com` | Langfuse server URL. |
| `LANGFUSE_PUBLIC_KEY` | -- | Langfuse public key. |
| `LANGFUSE_SECRET_KEY` | -- | Langfuse secret key. |
| `PHOENIX_COLLECTOR_ENDPOINT` | `http://localhost:6006` | Phoenix collector endpoint. |
| `PHOENIX_API_KEY` | -- | Phoenix API key. |
| `LANGCHAIN_API_KEY` | -- | LangChain/LangSmith API key. |
| `LANGCHAIN_PROJECT` | `composable-agents` | LangChain/LangSmith project name. |

---

2 changes: 2 additions & 0 deletions agents/code-reviewer.yaml
@@ -17,8 +17,10 @@ hitl:
- reject
subagents:
- name: security-auditor
model: "openai:anthropic/claude-haiku-4.5:nitro"
description: "Specialized in security vulnerability analysis"
instructions: "Focus on OWASP Top 10 and common security patterns"
- name: performance-analyst
model: "openai:anthropic/claude-haiku-4.5:nitro"
description: "Specialized in performance optimization"
instructions: "Analyze time complexity, memory usage, and bottlenecks"
5 changes: 3 additions & 2 deletions pyproject.toml
@@ -30,12 +30,13 @@ dependencies = [
"miniopy-async>=1.21.0",
"sqlalchemy[asyncio]>=2.0.0",
"alembic>=1.13.0",
"cachetools>=7.0.5",
]

[project.optional-dependencies]
langfuse = ["langfuse>=2.0.0"]
phoenix = ["arize-phoenix-otel>=0.1.0", "openinference-instrumentation-langchain>=0.1.0"]
tracing = ["langfuse>=2.0.0", "arize-phoenix-otel>=0.1.0", "openinference-instrumentation-langchain>=0.1.0"]
phoenix = ["arize-phoenix-otel>=0.1.0", "openinference-instrumentation-langchain>=0.1.0", "arize-phoenix-client>=2.3.0"]
tracing = ["langfuse>=2.0.0", "arize-phoenix-otel>=0.1.0", "openinference-instrumentation-langchain>=0.1.0", "arize-phoenix-client>=2.3.0"]

[dependency-groups]
dev = [
17 changes: 17 additions & 0 deletions src/application/requests/prompt.py
@@ -0,0 +1,17 @@
from pydantic import BaseModel, Field


class CreatePromptRequest(BaseModel):
    identifier: str = Field(..., min_length=1)
    content: list[dict[str, str]]
    model_name: str
    description: str | None = None
    tags: list[str] | None = None
    metadata: dict | None = None


class UpdatePromptRequest(BaseModel):
    content: list[dict[str, str]] | None = None
    model_name: str | None = None
    description: str | None = None
    tags: list[str] | None = None
    metadata: dict | None = None