diff --git a/.claude/agents/claude-optimizer.md b/.claude/agents/claude-optimizer.md
new file mode 100644
index 000000000..e48c4e454
--- /dev/null
+++ b/.claude/agents/claude-optimizer.md
@@ -0,0 +1,202 @@
+---
+name: claude-optimizer
+description: Optimizes CLAUDE.md files for maximum effectiveness with Sonnet 4 and Opus 4 models by analyzing structure, content clarity, token efficiency, and model-specific patterns
+tools: Read, Write, MultiEdit, Bash, LS, Glob, Grep, WebSearch, WebFetch, Task
+---
+
+You are an expert optimizer for CLAUDE.md files - configuration documents that guide Claude Code's behavior in software repositories. Your specialized knowledge covers best practices for token optimization, attention patterns, and instruction effectiveness for Sonnet 4 and Opus 4 models.
+
+## 🎯 PRIMARY DIRECTIVE
+
+**PRESERVE ALL PROJECT-SPECIFIC CONTEXT**: You MUST retain all project-specific information including:
+- Repository structure and file paths
+- Tool names, counts, and descriptions
+- API integration details
+- Build commands and scripts
+- Environment variables and defaults
+- Architecture descriptions
+- Testing requirements
+- Documentation references
+
+Optimization means making instructions clearer and more concise, NOT removing project context.
+
+## 🎯 Critical Constraints
+
+### 5K Token Limit
+**MANDATORY**: Keep CLAUDE.md under 5,000 tokens. This is the #1 optimization priority.
+- Current best practice: Aim for 2,500-3,500 tokens for optimal performance
+- If content exceeds 5K, split into modular files under `docs/` directory
+- Use `@path/to/file` references to include external context dynamically
+
+## 📝 Claude 4 Optimization Principles
+
+### 1. Precision Over Verbosity
+Claude 4 models excel at precise instruction following. Eliminate:
+- Explanatory text ("Please ensure", "It's important to")
+- Redundant instructions
+- Vague directives ("appropriately", "properly", "as needed")
+
+### 2. Parallel Tool Execution
+Optimize for Claude 4's parallel capabilities:
+```markdown
+ALWAYS batch independent operations:
+- Combined checks in one call: `pnpm run tsc && pnpm run lint && pnpm run test`
+- Multiple file reads/searches when investigating
+```
+
+### 3. Emphasis Hierarchy
+Use strategic emphasis:
+```
+🔴 CRITICAL - Security, data loss prevention
+🟡 MANDATORY - Required workflows
+🟢 IMPORTANT - Quality standards
+⚪ RECOMMENDED - Best practices
+```
+
+## 🔧 Tool Usage Strategy
+
+### Research Tools
+- **WebSearch**: Research latest prompt engineering techniques, Claude Code best practices
+- **WebFetch**: Read specific optimization guides, Claude documentation
+- **Task**: Delegate complex analysis (e.g., "analyze token distribution across sections")
+
+### Analysis Tools
+- **Grep**: Find patterns, redundancies, verbose language
+- **Glob**: Locate related documentation files
+- **Bash**: Count tokens (`wc -w`), check file sizes
+
+### Implementation Tools
+- **Read**: Analyze current CLAUDE.md
+- **MultiEdit**: Apply multiple optimizations efficiently
+- **Write**: Create optimized version
+
+## 📋 Optimization Methodology
+
+### Phase 1: Token Audit
+1. Count current tokens using `wc -w` (rough estimate: words × 1.3)
+2. Identify top 3 token-heavy sections
+3. Flag redundant/verbose content
+
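The words × 1.3 rule of thumb above can be scripted directly; a minimal sketch (the inline sample text stands in for a real CLAUDE.md):

```shell
# Rough token estimate: word count × 1.3 (heuristic from the audit step above)
words=$(printf 'Keep CLAUDE.md concise and precise\n' | wc -w)   # stand-in for: wc -w < CLAUDE.md
tokens=$(awk -v w="$words" 'BEGIN { printf "%d", w * 1.3 }')
echo "estimated tokens: $tokens"
```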
+### Phase 2: Content Compression
+1. **Transform Instructions (Keep Context)**
+ ```
+ Before: "Please make sure to follow TypeScript best practices"
+ After: "TypeScript: NEVER use 'any'. Use unknown or validated assertions."
+ ```
+
+2. **Consolidate Without Losing Information**
+ - Merge ONLY truly duplicate instructions
+ - Use tables to compress lists while keeping ALL items
+ - Convert prose to bullets but retain all details
+ - NEVER remove project-specific paths, commands, or tool names
+
+3. **Smart Modularization**
+ ```markdown
+ ## Extended Docs
+ - Architecture details: @docs/architecture.md # Only if >500 tokens
+ - API patterns: @docs/api-patterns.md # Keep critical patterns inline
+ - Testing guide: @docs/testing.md # Keep validation commands inline
+ ```
+
+ **CRITICAL**: Only modularize truly excessive detail. Keep all actionable instructions inline.
+
+### Phase 3: Structure Optimization
+1. **Critical-First Layout**
+ ```
+ 1. Core Directives (security, breaking changes)
+ 2. Workflow Requirements
+ 3. Validation Commands
+ 4. Context/References
+ ```
+
+2. **Visual Scanning**
+ - Section headers with emoji
+ - Consistent indentation
+ - Code blocks for commands
+
+3. **Extended Thinking Integration**
+ Add prompts that leverage Claude 4's reasoning:
+ ```markdown
+
+ For complex tasks, break down into steps and validate assumptions
+
+ ```
+
+## 📊 Output Format
+
+### 1. Optimization Report
+```markdown
+# CLAUDE.md Optimization Results
+
+**Metrics**
+- Before: X tokens | After: Y tokens (Z% reduction)
+- Clarity Score: Before X/10 → After Y/10
+- Critical instructions in first 500 tokens: ✅
+
+**High-Impact Changes**
+1. [Change] → Saved X tokens
+2. [Change] → Improved clarity by Y%
+3. [Change] → Enhanced model performance
+
+**Modularization** (if needed)
+- Main CLAUDE.md: X tokens
+- @docs/module1.md: Y tokens
+- @docs/module2.md: Z tokens
+```
+
+### 2. Optimized CLAUDE.md
+Deliver the complete optimized file with:
+- **ALL project-specific context preserved**
+- All critical instructions preserved
+- Token count under 5K (ideally 2.5-3.5K)
+- Clear visual hierarchy
+- Precise, actionable language
+- Every tool, path, command, and integration detail retained
+
+## 🔧 Quick Reference
+
+### Transform Patterns (With Context Preservation)
+| Before | After | Tokens Saved | Context Lost |
+|--------|-------|--------------|--------------|
+| "Please ensure you..." | "MUST:" | ~3 | None β |
+| "It's important to note that..." | (remove) | ~5 | None β |
+| Long explanation | Table/list | ~40% | None β |
+| Separate similar rules | Consolidated rule | ~60% | None β |
+| "The search_events tool translates..." | "search_events: NLβDiscoverQL" | ~10 | None β |
+| Remove tool descriptions | β DON'T DO THIS | ~500 | Critical β |
+| Remove architecture details | β DON'T DO THIS | ~800 | Critical β |
+
+### Example: Preserving Project Context
+
+**BAD Optimization (loses context):**
+```markdown
+## Tools
+Use the appropriate tools for your task.
+```
+
+**GOOD Optimization (preserves context):**
+```markdown
+## Tools (19 modules)
+- **search_events**: Natural language → DiscoverQL queries
+- **search_issues**: Natural language → Issue search syntax
+- **[17 other tools]**: Query, create, update Sentry resources
+```
+
+### Validation Checklist
+- [ ] Under 5K tokens
+- [ ] Critical instructions in first 20%
+- [ ] No vague language
+- [ ] All paths/commands verified
+- [ ] Parallel execution emphasized
+- [ ] Modular references added (if >5K)
+- [ ] **ALL project context preserved**:
+ - [ ] Repository structure intact
+ - [ ] All tool names/descriptions present
+ - [ ] Build commands unchanged
+ - [ ] Environment variables preserved
+ - [ ] Architecture details retained
+ - [ ] File paths accurate
+
+Remember: Every token counts. Precision beats explanation. Structure enables speed.
+
+**NEVER sacrifice project context for token savings. A shorter but incomplete CLAUDE.md is worse than a complete one.**
\ No newline at end of file
diff --git a/.claude/commands/gh-pr.md b/.claude/commands/gh-pr.md
new file mode 100644
index 000000000..cce4aade8
--- /dev/null
+++ b/.claude/commands/gh-pr.md
@@ -0,0 +1,14 @@
+Create (or update) a Pull Request.
+
+We use the GitHub CLI (`gh`) to manage pull requests.
+
+If this branch does not already have a pull request, create one:
+
+- If we're on the main branch, switch to a working branch.
+- Commit our changes if we haven't already.
+
+If we already have one:
+
+- Verify our changes against the base branch and update the PR title and description to maintain accuracy.
+
+The PR description should not focus on a test plan; instead, give a concise description of the changes (features, breaking changes, major bug fixes, and architectural changes). Only include a category of change if it is actually present, and always describe changes relative to the base branch.
diff --git a/.claude/commands/gh-review.md b/.claude/commands/gh-review.md
new file mode 100644
index 000000000..639145688
--- /dev/null
+++ b/.claude/commands/gh-review.md
@@ -0,0 +1,9 @@
+Address feedback and checks in a Pull Request.
+
+We use the GitHub CLI (`gh`) to manage pull requests.
+
+Review the status checks for this PR, and identify any failures from them.
+
+If there are no failures, review the PR feedback.
+
+Do NOT assume feedback is valid. Always verify that the feedback is accurate (for example, that a reported bug is real) before attempting to address it.
diff --git a/.claude/settings.json b/.claude/settings.json
new file mode 100644
index 000000000..b3110efc3
--- /dev/null
+++ b/.claude/settings.json
@@ -0,0 +1,27 @@
+{
+ "permissions": {
+ "allow": [
+ "WebFetch(domain:mcp.sentry.dev)",
+ "WebFetch(domain:docs.sentry.io)",
+ "WebFetch(domain:develop.sentry.dev)",
+ "WebFetch(domain:modelcontextprotocol.io)",
+ "WebFetch(domain:docs.anthropic.com)",
+ "Bash(grep:*)",
+ "Bash(jq:*)",
+ "Bash(pnpx vitest:*)",
+ "Bash(pnpm test:*)",
+ "Bash(pnpm run typecheck:*)",
+ "Bash(pnpm run check:*)",
+ "Bash(pnpm run:*)",
+ "Bash(pnpx tsx:*)",
+ "Bash(gh pr checks:*)",
+ "Bash(gh pr view:*)",
+ "Bash(gh run view:*)",
+ "Bash(git status:*)"
+ ],
+ "deny": []
+ },
+ "enableAllProjectMcpServers": true,
+ "includeCoAuthoredBy": true,
+ "enabledMcpjsonServers": ["sentry"]
+}
diff --git a/.cursor/mcp.json b/.cursor/mcp.json
new file mode 100644
index 000000000..2f31691bc
--- /dev/null
+++ b/.cursor/mcp.json
@@ -0,0 +1,8 @@
+{
+ "mcpServers": {
+ "sentry": {
+ "type": "http",
+ "url": "https://mcp.sentry.dev/mcp/sentry/mcp-server"
+ }
+ }
+}
diff --git a/.devcontainer/devcontainer.json b/.devcontainer/devcontainer.json
new file mode 100644
index 000000000..39bbd2681
--- /dev/null
+++ b/.devcontainer/devcontainer.json
@@ -0,0 +1,4 @@
+{
+ "image": "mcr.microsoft.com/devcontainers/universal:2",
+ "features": {}
+}
diff --git a/.env.example b/.env.example
new file mode 100644
index 000000000..aba203430
--- /dev/null
+++ b/.env.example
@@ -0,0 +1,24 @@
+# Root Environment Configuration
+# This file provides default environment variables for all packages.
+# Individual packages can override these values with their own .env files.
+
+# OpenAI API key for AI-powered search tools (search_events, search_issues)
+# Get yours at: https://platform.openai.com/api-keys
+# Required for natural language query translation features
+OPENAI_API_KEY=sk-proj-agenerate-this
+
+# For mcp-test-client: Anthropic API key for Claude access
+# ANTHROPIC_API_KEY=your_anthropic_api_key
+
+# For mcp-test-client: Sentry access token (for stdio transport)
+# Get one from: https://sentry.io/settings/account/api/auth-tokens/
+# SENTRY_ACCESS_TOKEN=your_sentry_access_token
+
+# Sentry Spotlight - development environment tool for local debugging
+# Set to 1 to enable Spotlight integration (recommended for development)
+# Learn more: https://spotlightjs.com
+SENTRY_SPOTLIGHT=1
+
+# IMPORTANT: For local development, you also need to create:
+# - packages/mcp-cloudflare/.env - OAuth configuration (required for authentication)
+# Copy packages/mcp-cloudflare/.env.example to .env and fill in your OAuth credentials
diff --git a/.github/workflows/deploy.yml b/.github/workflows/deploy.yml
new file mode 100644
index 000000000..5222b10bd
--- /dev/null
+++ b/.github/workflows/deploy.yml
@@ -0,0 +1,147 @@
+name: Deploy to Cloudflare
+
+permissions:
+ contents: read
+ deployments: write
+ checks: write
+
+on:
+ workflow_run:
+ workflows: ["Test"]
+ types:
+ - completed
+ branches: [main]
+ workflow_dispatch:
+
+jobs:
+ deploy:
+ name: Deploy to Cloudflare
+ runs-on: ubuntu-latest
+ if: ${{ github.event.workflow_run.conclusion == 'success' || github.event_name == 'workflow_dispatch' }}
+
+ steps:
+ - uses: actions/checkout@v4
+
+ - name: Setup Node.js
+ uses: actions/setup-node@v4
+ with:
+ node-version: "20"
+
+ # pnpm/action-setup@v4
+ - uses: pnpm/action-setup@a7487c7e89a18df4991f7f222e4898a00d66ddda
+ name: Install pnpm
+ with:
+ run_install: false
+
+ - name: Get pnpm store directory
+ shell: bash
+ run: |
+ echo "STORE_PATH=$(pnpm store path --silent)" >> $GITHUB_ENV
+
+ - uses: actions/cache@v4
+ name: Setup pnpm cache
+ with:
+ path: ${{ env.STORE_PATH }}
+ key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
+ restore-keys: |
+ ${{ runner.os }}-pnpm-store-
+
+ - name: Install dependencies
+ run: pnpm install
+
+ # === BUILD AND DEPLOY CANARY WORKER ===
+ - name: Build
+ working-directory: packages/mcp-cloudflare
+ run: pnpm build
+ env:
+ SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
+
+ - name: Deploy to Canary Worker
+ id: deploy_canary
+ uses: cloudflare/wrangler-action@v3
+ with:
+ apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
+ accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
+ workingDirectory: packages/mcp-cloudflare
+ command: deploy --config wrangler.canary.jsonc
+ packageManager: pnpm
+
+ - name: Wait for Canary to Propagate
+ if: success()
+ run: |
+ echo "Waiting 30 seconds for canary deployment to propagate..."
+ sleep 30
+
+ # === SMOKE TEST CANARY ===
+ - name: Run Smoke Tests on Canary
+ id: canary_smoke_tests
+ if: success()
+ env:
+ PREVIEW_URL: https://sentry-mcp-canary.getsentry.workers.dev
+ run: |
+ echo "Running smoke tests against canary worker..."
+ cd packages/smoke-tests
+ pnpm test:ci
+
+ - name: Publish Canary Smoke Test Report
+ uses: mikepenz/action-junit-report@cf701569b05ccdd861a76b8607a66d76f6fd4857
+ if: always() && steps.canary_smoke_tests.outcome != 'skipped'
+ with:
+ report_paths: "packages/smoke-tests/tests.junit.xml"
+ check_name: "Canary Smoke Test Results"
+ fail_on_failure: false
+
+ # === DEPLOY PRODUCTION WORKER (only if canary tests pass) ===
+ - name: Deploy to Production Worker
+ id: deploy_production
+ if: steps.canary_smoke_tests.outcome == 'success'
+ uses: cloudflare/wrangler-action@v3
+ with:
+ apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
+ accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
+ workingDirectory: packages/mcp-cloudflare
+ command: deploy
+ packageManager: pnpm
+
+ - name: Wait for Production to Propagate
+ if: steps.deploy_production.outcome == 'success'
+ run: |
+ echo "Waiting 30 seconds for production deployment to propagate..."
+ sleep 30
+
+ # === SMOKE TEST PRODUCTION ===
+ - name: Run Smoke Tests on Production
+ id: production_smoke_tests
+ if: steps.deploy_production.outcome == 'success'
+ env:
+ PREVIEW_URL: https://mcp.sentry.dev
+ run: |
+ echo "Running smoke tests on production..."
+ cd packages/smoke-tests
+ pnpm test:ci
+
+ - name: Publish Production Smoke Test Report
+ uses: mikepenz/action-junit-report@cf701569b05ccdd861a76b8607a66d76f6fd4857
+ if: always() && steps.production_smoke_tests.outcome != 'skipped'
+ with:
+ report_paths: "packages/smoke-tests/tests.junit.xml"
+ check_name: "Production Smoke Test Results"
+ fail_on_failure: false
+
+ # === ROLLBACK IF PRODUCTION SMOKE TESTS FAIL ===
+ - name: Rollback Production on Smoke Test Failure
+ if: steps.production_smoke_tests.outcome == 'failure'
+ uses: cloudflare/wrangler-action@v3
+ with:
+ apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
+ accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
+ workingDirectory: packages/mcp-cloudflare
+ command: rollback
+ packageManager: pnpm
+ continue-on-error: true
+
+ - name: Fail Job if Production Smoke Tests Failed
+ if: steps.production_smoke_tests.outcome == 'failure'
+ run: |
+ echo "Production smoke tests failed - job failed after rollback"
+ exit 1
diff --git a/.github/workflows/eval.yml b/.github/workflows/eval.yml
index 3d050cdab..292a9438e 100644
--- a/.github/workflows/eval.yml
+++ b/.github/workflows/eval.yml
@@ -63,7 +63,6 @@ jobs:
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
with:
- directory: ./coverage/
flags: evals
name: codecov-evals
fail_ci_if_error: false
diff --git a/.github/workflows/smoke-tests.yml b/.github/workflows/smoke-tests.yml
new file mode 100644
index 000000000..fce6062c7
--- /dev/null
+++ b/.github/workflows/smoke-tests.yml
@@ -0,0 +1,139 @@
+name: Smoke Tests (Local)
+
+permissions:
+ contents: read
+ checks: write
+
+on:
+ push:
+ branches: [main]
+ pull_request:
+
+jobs:
+ smoke-tests:
+ name: Run Smoke Tests Against Local Server
+ runs-on: ubuntu-latest
+
+ steps:
+ - uses: actions/checkout@v4
+
+ - name: Setup Node.js
+ uses: actions/setup-node@v4
+ with:
+ node-version: "20"
+
+ # pnpm/action-setup@v4
+ - uses: pnpm/action-setup@a7487c7e89a18df4991f7f222e4898a00d66ddda
+ name: Install pnpm
+ with:
+ run_install: false
+
+ - name: Get pnpm store directory
+ shell: bash
+ run: |
+ echo "STORE_PATH=$(pnpm store path --silent)" >> $GITHUB_ENV
+
+ - uses: actions/cache@v4
+ name: Setup pnpm cache
+ with:
+ path: ${{ env.STORE_PATH }}
+ key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
+ restore-keys: |
+ ${{ runner.os }}-pnpm-store-
+
+ - name: Install dependencies
+ run: pnpm install --no-frozen-lockfile
+
+ - name: Build
+ run: pnpm build
+
+ - name: Start local dev server
+ working-directory: packages/mcp-cloudflare
+ run: |
+ # Start wrangler in background and capture output
+ pnpm exec wrangler dev --port 8788 --local > wrangler.log 2>&1 &
+ WRANGLER_PID=$!
+ echo "WRANGLER_PID=$WRANGLER_PID" >> $GITHUB_ENV
+ echo "Waiting for server to start (PID: $WRANGLER_PID)..."
+
+ # Wait for server to be ready (up to 2 minutes)
+ MAX_ATTEMPTS=24
+ ATTEMPT=0
+ while [ $ATTEMPT -lt $MAX_ATTEMPTS ]; do
+ # Check if wrangler process is still running
+ if ! kill -0 $WRANGLER_PID 2>/dev/null; then
+              echo "❌ Wrangler process died unexpectedly!"
+              echo "📋 Last 50 lines of wrangler.log:"
+ tail -50 wrangler.log
+ exit 1
+ fi
+
+ if curl -s -f -o /dev/null http://localhost:8788/; then
+              echo "✅ Server is ready!"
+              echo "📋 Wrangler startup log:"
+ cat wrangler.log
+ break
+ else
+              echo "⏳ Waiting for server to start (attempt $((ATTEMPT+1))/$MAX_ATTEMPTS)..."
+ # Show partial log every 5 attempts
+ if [ $((ATTEMPT % 5)) -eq 0 ] && [ $ATTEMPT -gt 0 ]; then
+                echo "📋 Current wrangler.log output:"
+ tail -20 wrangler.log
+ fi
+ fi
+
+ ATTEMPT=$((ATTEMPT+1))
+ sleep 5
+ done
+
+ if [ $ATTEMPT -eq $MAX_ATTEMPTS ]; then
+            echo "❌ Server failed to start after $MAX_ATTEMPTS attempts"
+            echo "📋 Full wrangler.log:"
+ cat wrangler.log
+ exit 1
+ fi
+
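The readiness wait above is a bounded-retry loop: probe, sleep, give up after a fixed number of attempts. The same shape in a minimal sketch, with a hypothetical `probe` function standing in for the curl check:

```shell
# Bounded retry: probe until success or MAX attempts are exhausted
MAX=5; ATTEMPT=0; READY=0
probe() { [ "$ATTEMPT" -ge 2 ]; }   # hypothetical probe; succeeds on the third try
while [ $ATTEMPT -lt $MAX ]; do
  if probe; then READY=1; break; fi
  ATTEMPT=$((ATTEMPT+1))
done
echo "ready=$READY attempts=$ATTEMPT"
```

In the workflow the probe is `curl -s -f` against the dev server, plus a `kill -0` check that the wrangler process is still alive.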
+ - name: Run smoke tests against local server
+ env:
+ PREVIEW_URL: http://localhost:8788
+ working-directory: packages/smoke-tests
+ run: |
+          echo "🧪 Running smoke tests against local server at $PREVIEW_URL"
+
+ # Give server a bit more time to stabilize after startup
+          echo "⏳ Waiting 5 seconds for server to stabilize..."
+ sleep 5
+
+ # Verify server is still responding before running tests
+ if ! curl -s -f -o /dev/null http://localhost:8788/; then
+            echo "❌ Server is not responding before tests!"
+            echo "📋 Wrangler log:"
+ cat ../mcp-cloudflare/wrangler.log
+ exit 1
+ fi
+
+          echo "✅ Server is responding, running tests..."
+ pnpm test:ci || TEST_EXIT_CODE=$?
+
+ # If tests failed, show server logs for debugging
+ if [ "${TEST_EXIT_CODE:-0}" -ne 0 ]; then
+            echo "❌ Tests failed with exit code ${TEST_EXIT_CODE}"
+            echo "📋 Wrangler log at time of failure:"
+ cat ../mcp-cloudflare/wrangler.log
+ exit ${TEST_EXIT_CODE}
+ fi
+
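The `pnpm test:ci || TEST_EXIT_CODE=$?` line above uses a pattern worth noting: the `|| var=$?` suffix records a command's failure code without aborting the step, so the server logs can be dumped before exiting. A minimal sketch:

```shell
# Capture an exit code without stopping the script on failure
false || TEST_EXIT_CODE=$?            # `false` stands in for the failing test run
echo "captured exit code: ${TEST_EXIT_CODE:-0}"

true || TEST_EXIT_CODE2=$?            # on success, the assignment never runs
echo "captured exit code: ${TEST_EXIT_CODE2:-0}"
```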
+ - name: Publish Smoke Test Report
+ uses: mikepenz/action-junit-report@cf701569b05ccdd861a76b8607a66d76f6fd4857
+ if: always()
+ with:
+ report_paths: "packages/smoke-tests/tests.junit.xml"
+ check_name: "Local Smoke Test Results"
+ fail_on_failure: true
+
+ - name: Stop local server
+ if: always()
+ run: |
+ if [ ! -z "$WRANGLER_PID" ]; then
+ kill $WRANGLER_PID || true
+ fi
\ No newline at end of file
diff --git a/.github/workflows/test.yml b/.github/workflows/test.yml
index c2b52c0e1..6c6797160 100644
--- a/.github/workflows/test.yml
+++ b/.github/workflows/test.yml
@@ -37,7 +37,7 @@ jobs:
- name: Install dependencies
run: pnpm install --no-frozen-lockfile
-
+
- name: Run build
run: pnpm build
@@ -52,7 +52,6 @@ jobs:
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
with:
- directory: ./coverage/
flags: unittests
name: codecov-unittests
fail_ci_if_error: false
diff --git a/.github/workflows/token-cost.yml b/.github/workflows/token-cost.yml
new file mode 100644
index 000000000..d85802b74
--- /dev/null
+++ b/.github/workflows/token-cost.yml
@@ -0,0 +1,262 @@
+name: Token Cost
+
+on:
+ push:
+ branches: [main]
+ pull_request:
+
+permissions:
+ contents: read
+ pull-requests: write
+ checks: write
+
+jobs:
+ measure-tokens:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+
+ - name: Setup Node.js
+ uses: actions/setup-node@v4
+ with:
+ node-version: "20"
+
+ # pnpm/action-setup@v4
+ - uses: pnpm/action-setup@a7487c7e89a18df4991f7f222e4898a00d66ddda
+ name: Install pnpm
+ with:
+ run_install: false
+
+ - name: Get pnpm store directory
+ shell: bash
+ run: |
+ echo "STORE_PATH=$(pnpm store path --silent)" >> $GITHUB_ENV
+
+ - uses: actions/cache@v4
+ name: Setup pnpm cache
+ with:
+ path: ${{ env.STORE_PATH }}
+ key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
+ restore-keys: |
+ ${{ runner.os }}-pnpm-store-
+
+ - name: Install dependencies
+ run: pnpm install --no-frozen-lockfile
+
+ - name: Build tool definitions
+ run: pnpm -w run build
+
+ - name: Measure token cost
+ id: measure
+ working-directory: packages/mcp-server
+ run: |
+ # Run token counter with JSON output to file
+ pnpm run measure-tokens -- -o token-stats.json
+
+ # Extract key metrics from JSON for GitHub outputs
+ TOTAL_TOKENS=$(jq -r '.total_tokens' token-stats.json)
+ TOOL_COUNT=$(jq -r '.tool_count' token-stats.json)
+ AVG_TOKENS=$(jq -r '.avg_tokens_per_tool' token-stats.json)
+
+ # Save for later steps
+ echo "total_tokens=$TOTAL_TOKENS" >> $GITHUB_OUTPUT
+ echo "tool_count=$TOOL_COUNT" >> $GITHUB_OUTPUT
+ echo "avg_tokens=$AVG_TOKENS" >> $GITHUB_OUTPUT
+
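The extraction above assumes token-stats.json exposes top-level numeric fields. A runnable sketch with made-up sample data (the field names match the workflow; the values are hypothetical):

```shell
# Sample stats file (hypothetical values; the real file is produced by measure-tokens)
cat > /tmp/token-stats.json <<'EOF'
{"total_tokens": 3800, "tool_count": 19, "avg_tokens_per_tool": 200}
EOF

# Same jq extraction as the workflow step
TOTAL_TOKENS=$(jq -r '.total_tokens' /tmp/token-stats.json)
TOOL_COUNT=$(jq -r '.tool_count' /tmp/token-stats.json)
echo "total=$TOTAL_TOKENS tools=$TOOL_COUNT"
```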
+ - name: Generate detailed report
+ id: report
+ working-directory: packages/mcp-server
+ run: |
+          # Build markdown report from JSON (header plus per-tool table)
+          cat > token-report.md <<EOF
+          # 📊 MCP Server Token Cost Report
+
+          | Tool | Tokens |
+          |------|--------|
+          EOF
+          # Append per-tool rows (array path in token-stats.json assumed)
+          jq -r '.tools[] | "| \(.name) | \(.tokens) |"' token-stats.json >> token-report.md
+
+          # Publish the report to the job summary
+          cat token-report.md >> $GITHUB_STEP_SUMMARY
+
+ - name: Download main branch token stats
+ if: github.event_name == 'pull_request'
+ uses: dawidd6/action-download-artifact@v6
+ continue-on-error: true
+ with:
+ workflow: token-cost.yml
+ branch: main
+ name_is_regexp: true
+ name: 'token-stats-.*'
+ path: main-stats
+ search_artifacts: true
+
+ - name: Compare with main branch
+ id: compare
+ if: github.event_name == 'pull_request'
+ working-directory: packages/mcp-server
+ run: |
+ # Check if we got main's stats
+ if [ -f ../../main-stats/token-stats.json ]; then
+ MAIN_TOKENS=$(jq -r '.total_tokens' ../../main-stats/token-stats.json)
+ CURRENT_TOKENS=${{ steps.measure.outputs.total_tokens }}
+ DELTA=$((CURRENT_TOKENS - MAIN_TOKENS))
+
+ echo "has_comparison=true" >> $GITHUB_OUTPUT
+ echo "main_tokens=$MAIN_TOKENS" >> $GITHUB_OUTPUT
+ echo "delta=$DELTA" >> $GITHUB_OUTPUT
+
+ if [ $DELTA -gt 0 ]; then
+ echo "delta_direction=increased" >> $GITHUB_OUTPUT
+              echo "delta_symbol=📈" >> $GITHUB_OUTPUT
+ elif [ $DELTA -lt 0 ]; then
+ echo "delta_direction=decreased" >> $GITHUB_OUTPUT
+              echo "delta_symbol=📉" >> $GITHUB_OUTPUT
+ else
+ echo "delta_direction=unchanged" >> $GITHUB_OUTPUT
+              echo "delta_symbol=➡️" >> $GITHUB_OUTPUT
+ fi
+ else
+ echo "has_comparison=false" >> $GITHUB_OUTPUT
+ fi
+
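The branching above maps the token delta to a direction and a symbol; the same classification as a small function (sketch, with the symbols omitted):

```shell
# Classify a token delta the same way the compare step does
classify() {
  if [ "$1" -gt 0 ]; then echo increased
  elif [ "$1" -lt 0 ]; then echo decreased
  else echo unchanged
  fi
}
up=$(classify 42); down=$(classify -7); same=$(classify 0)
echo "$up $down $same"
```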
+ - name: Add comparison to report
+ if: github.event_name == 'pull_request' && steps.compare.outputs.has_comparison == 'true'
+ working-directory: packages/mcp-server
+ run: |
+          # Prepend a comparison section to the report
+          cat > comparison-header.md <<EOF
+          ## Change vs main
+
+          ${{ steps.compare.outputs.delta_symbol }} ${{ steps.compare.outputs.delta }} tokens vs main (${{ steps.compare.outputs.main_tokens }} tokens)
+          EOF
+          cat comparison-header.md token-report.md > token-report.tmp && mv token-report.tmp token-report.md
+
+          # Publish the updated report to the job summary
+          cat token-report.md >> $GITHUB_STEP_SUMMARY
+
+ - name: Create check run
+ uses: actions/github-script@v7
+ if: always()
+ with:
+ script: |
+ const measureSucceeded = '${{ steps.measure.outcome }}' === 'success';
+ const totalTokens = '${{ steps.measure.outputs.total_tokens }}';
+ const toolCount = '${{ steps.measure.outputs.tool_count }}';
+ const avgTokens = '${{ steps.measure.outputs.avg_tokens }}';
+ const hasComparison = '${{ steps.compare.outputs.has_comparison }}' === 'true';
+ const delta = '${{ steps.compare.outputs.delta }}';
+            const deltaSymbol = '${{ steps.compare.outputs.delta_symbol }}' || '📊';
+
+ let title;
+ let summary;
+ let conclusion;
+
+ if (measureSucceeded && totalTokens && toolCount && avgTokens) {
+ // Build title
+ title = `${totalTokens} tokens (${toolCount} tools, avg ${avgTokens})`;
+
+ // Build summary
+ summary = `**Total Tokens:** ${totalTokens}\n`;
+ summary += `**Tool Count:** ${toolCount}\n`;
+ summary += `**Average:** ${avgTokens} tokens/tool\n`;
+
+ if (hasComparison && delta) {
+ const deltaPrefix = parseInt(delta) >= 0 ? '+' : '';
+ title += ` ${deltaSymbol}`;
+ summary += `\n**Change from main:** ${deltaPrefix}${delta} tokens ${deltaSymbol}`;
+ }
+
+ conclusion = 'success';
+ } else {
+ // Measurement step failed or outputs are missing
+ title = 'Token cost measurement failed';
+ summary = 'The token cost measurement step failed. Check the workflow logs for details.';
+ conclusion = 'failure';
+ }
+
+ await github.rest.checks.create({
+ owner: context.repo.owner,
+ repo: context.repo.repo,
+ name: 'Token Cost',
+ head_sha: context.sha,
+ status: 'completed',
+ conclusion: conclusion,
+ output: {
+ title: title,
+ summary: summary,
+ text: measureSucceeded ? 'See job summary for detailed per-tool breakdown' : 'Token measurement failed - check workflow logs'
+ },
+ details_url: `https://github.com/${context.repo.owner}/${context.repo.repo}/actions/runs/${context.runId}`
+ });
+
+ - name: Comment on PR if token count changed
+ uses: actions/github-script@v7
+ if: github.event_name == 'pull_request' && steps.compare.outputs.has_comparison == 'true' && steps.compare.outputs.delta != '0'
+ with:
+ script: |
+ const fs = require('fs');
+ const report = fs.readFileSync('packages/mcp-server/token-report.md', 'utf8');
+
+ // Find existing comment
+ const comments = await github.rest.issues.listComments({
+ owner: context.repo.owner,
+ repo: context.repo.repo,
+ issue_number: context.issue.number,
+ });
+
+ const botComment = comments.data.find(comment =>
+ comment.user.type === 'Bot' &&
+              comment.body.includes('📊 MCP Server Token Cost Report')
+ );
+
+ // Update or create comment
+ if (botComment) {
+ await github.rest.issues.updateComment({
+ owner: context.repo.owner,
+ repo: context.repo.repo,
+ comment_id: botComment.id,
+ body: report
+ });
+ } else {
+ await github.rest.issues.createComment({
+ owner: context.repo.owner,
+ repo: context.repo.repo,
+ issue_number: context.issue.number,
+ body: report
+ });
+ }
+
+ - name: Upload token stats artifact
+ uses: actions/upload-artifact@v4
+ if: always()
+ with:
+ name: token-stats-${{ github.sha }}
+ path: packages/mcp-server/token-stats.json
+ retention-days: 90
diff --git a/.gitignore b/.gitignore
index 4ef377b4e..520ba095c 100644
--- a/.gitignore
+++ b/.gitignore
@@ -53,6 +53,7 @@ dist
.dev.vars
.wrangler/
+wrangler.log
*.junit.xml
coverage
@@ -60,3 +61,7 @@ coverage
# Sentry Config File
.env.sentry-build-plugin
+
+# Generated files
+packages/mcp-server/src/toolDefinitions.json
+packages/mcp-server/src/skillDefinitions.json
diff --git a/.mcp.json b/.mcp.json
new file mode 100644
index 000000000..d0d6951b5
--- /dev/null
+++ b/.mcp.json
@@ -0,0 +1,18 @@
+{
+ "mcpServers": {
+ "sentry": {
+ "type": "http",
+ "url": "https://mcp.sentry.dev/mcp/sentry/mcp-server?agent=1"
+ },
+ "sentry-dev": {
+ "type": "http",
+ "url": "http://localhost:5173/mcp/sentry/mcp-server"
+ },
+ "sentry-spotlight": {
+ "type": "stdio",
+ "command": "npx",
+ "args": ["-y", "@spotlightjs/spotlight", "--stdio-mcp"],
+ "env": {}
+ }
+ }
+}
diff --git a/.vscode/mcp.json b/.vscode/mcp.json
index ef73c09bb..a8372ae22 100644
--- a/.vscode/mcp.json
+++ b/.vscode/mcp.json
@@ -1,14 +1,7 @@
{
"servers": {
- "sentry-dev": {
- "type": "stdio",
- "command": "npx",
- "args": ["-y", "mcp-remote@latest", "http://localhost:5173/sse"]
- },
"sentry": {
- "type": "stdio",
- "command": "npx",
- "args": ["-y", "mcp-remote@latest", "https://mcp.sentry.dev/sse"]
+ "url": "https://mcp.sentry.dev/mcp"
}
}
}
diff --git a/AGENTS.md b/AGENTS.md
new file mode 100644
index 000000000..8357e31c2
--- /dev/null
+++ b/AGENTS.md
@@ -0,0 +1,135 @@
+# CLAUDE.md
+
+## 🔴 CRITICAL Requirements
+
+**MANDATORY before ANY code:**
+1. TypeScript: NEVER use `any`. Use `unknown` or proper types
+2. Security: NO API keys in logs. NO vulnerabilities
+3. Validation: `pnpm run tsc && pnpm run lint && pnpm run test`
+4. Tools limit: ≤20 (hard limit: 25)
+
+**MANDATORY reads:**
+- Start here: CLAUDE.md → Contributor doc map
+- Tools → @docs/adding-tools.mdc
+- Testing → @docs/testing.mdc
+- PRs → @docs/pr-management.mdc
+
+## 🟡 MANDATORY Workflow
+
+```bash
+# BEFORE coding (parallel execution)
+cat docs/[component].mdc & ls -la neighboring-files & git status
+
+# AFTER coding (sequential - fail fast)
+pnpm run tsc && pnpm run lint && pnpm run test # ALL must pass
+```
+
+## Repository Map
+
+```
+sentry-mcp/
+├── packages/
+│   ├── mcp-server/          # Main MCP server
+│   │   ├── src/
+│   │   │   ├── tools/       # 19 tool modules
+│   │   │   ├── server.ts    # MCP protocol
+│   │   │   ├── api-client/  # Sentry API
+│   │   │   └── internal/    # Shared utils
+│   │   └── scripts/         # Build scripts
+│   ├── mcp-cloudflare/      # Web app
+│   ├── mcp-server-evals/    # AI tests
+│   ├── mcp-server-mocks/    # MSW mocks
+│   └── mcp-test-client/     # Test client
+└── docs/                    # All docs
+```
+
+## AI-Powered Search Tools
+
+**search_events** (`packages/mcp-server/src/tools/search-events/`):
+- Natural language → DiscoverQL queries
+- GPT-4o agent with structured outputs
+- Tools: `datasetAttributes`, `otelSemantics`, `whoami`
+- Requires: `OPENAI_API_KEY`
+
+**search_issues** (`packages/mcp-server/src/tools/search-issues/`):
+- Natural language → issue search syntax
+- GPT-4o agent with structured outputs
+- Tools: `issueFields`, `whoami`
+- Requires: `OPENAI_API_KEY`
+
+## 🟢 Key Commands
+
+```bash
+# Development
+pnpm run dev # Start development
+pnpm run build # Build all packages
+pnpm run generate-otel-namespaces # Update OpenTelemetry docs
+
+# Manual Testing (preferred for testing MCP changes)
+pnpm -w run cli "who am I?" # Test with local dev server (default)
+pnpm -w run cli --agent "who am I?" # Test agent mode (use_sentry tool) - approximately 2x slower
+pnpm -w run cli --mcp-host=https://mcp.sentry.dev "query" # Test against production
+pnpm -w run cli --access-token=TOKEN "query" # Test with local stdio mode
+
+# Quality checks (combine for speed)
+pnpm run tsc && pnpm run lint && pnpm run test
+
+# Token cost monitoring
+pnpm run measure-tokens # Check tool definition overhead
+
+# Common workflows
+pnpm run build && pnpm run test # Before PR
+grep -r "TODO\|FIXME" src/ # Find tech debt
+```
+
+## Quick Reference
+
+**Defaults:**
+- Organization: `sentry`
+- Project: `mcp-server`
+- Transport: stdio
+- Auth: access tokens (NOT OAuth)
+
+**Doc Index:**
+
+- Core Guidelines
+  - @docs/coding-guidelines.mdc → Code standards and patterns
+  - @docs/common-patterns.mdc → Reusable patterns and conventions
+  - @docs/quality-checks.mdc → Required checks before changes
+  - @docs/error-handling.mdc → Error handling patterns
+
+- API and Tools
+  - @docs/adding-tools.mdc → Add new MCP tools
+  - @docs/api-patterns.mdc → Sentry API usage
+  - @docs/search-events-api-patterns.md → search_events specifics
+
+- Infrastructure and Operations
+  - @docs/architecture.mdc → System design
+  - @docs/releases/cloudflare.mdc → Cloudflare Workers release
+  - @docs/releases/stdio.mdc → npm package release
+  - @docs/monitoring.mdc → Monitoring/telemetry
+  - @docs/security.mdc → Security and authentication
+  - @docs/token-cost-tracking.mdc → Track MCP tool definition overhead
+  - @docs/cursor.mdc → Cursor IDE integration
+
+- Testing
+  - @docs/testing.mdc → Testing strategies and patterns
+  - @docs/testing-stdio.md → Stdio testing playbook (build, run, test)
+  - @docs/testing-remote.md → Remote testing playbook (OAuth, web UI, CLI)
+
+- LLM-Specific
+  - @docs/llms/documentation-style-guide.mdc → How to write LLM docs
+  - @docs/llms/document-scopes.mdc → Doc scopes and purposes
+
+## Rules
+
+1. **Code**: Follow existing patterns. Check adjacent files
+2. **Errors**: Try/catch all async. Log: `console.error('[ERROR]', error.message, error.stack)`
+ - Sentry API 429: Retry with exponential backoff
+ - Sentry API 401/403: Check token permissions
+3. **Docs**: Update when changing functionality
+4. **PR**: Follow `docs/pr-management.mdc` for commit/PR guidelines (includes AI attribution)
+5. **Tasks**: Use TodoWrite for 3+ steps. Batch tool calls when possible
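Rule 2's "retry with exponential backoff" for 429s can be sketched as below. This is illustrative only; the helper name and delay schedule are not part of this codebase:

```typescript
// Illustrative only: retry a Sentry API call on 429 with exponential backoff.
async function withBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 1000,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      const status = (error as { status?: number }).status;
      // Only 429 is retryable; 401/403 means checking token permissions (rule 2)
      if (status !== 429 || attempt >= maxRetries) throw error;
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
}
```

Callers would wrap individual API calls, e.g. `await withBackoff(() => api.issues.list(...))`.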
+
+---
+*Optimized for Codex CLI (OpenAI) and Claude Code*
diff --git a/CLAUDE.md b/CLAUDE.md
new file mode 120000
index 000000000..47dc3e3d8
--- /dev/null
+++ b/CLAUDE.md
@@ -0,0 +1 @@
+AGENTS.md
\ No newline at end of file
diff --git a/Dockerfile b/Dockerfile
deleted file mode 100644
index a22c26336..000000000
--- a/Dockerfile
+++ /dev/null
@@ -1,22 +0,0 @@
-FROM node:lts-alpine AS base
-ENV PNPM_HOME="/pnpm"
-ENV PATH="$PNPM_HOME:$PATH"
-RUN corepack enable
-
-FROM base AS build
-COPY . /app
-WORKDIR /app
-# ensure latest corepack otherwise we could hit cert issues
-RUN npm i -g corepack@latest
-RUN --mount=type=cache,id=pnpm,target=/pnpm/store \
- pnpm install --frozen-lockfile
-RUN pnpm run -r build
-RUN pnpm deploy --filter=sentry-mcp --prod /prod/sentry-mcp
-
-FROM base AS app
-COPY --from=build /prod/sentry-mcp /app
-WORKDIR /app
-# Expose port if needed (though stdio doesn't need it, but may be used in dev)
-EXPOSE 8788
-# Run the MCP server via stdio transport
-CMD ["npm", "run", "start", "--"]
diff --git a/Makefile b/Makefile
new file mode 100644
index 000000000..3eef35c12
--- /dev/null
+++ b/Makefile
@@ -0,0 +1,73 @@
+.PHONY: setup-env clean
+
+# Clean all generated files, caches, and dependencies
+clean:
+	@echo "🧹 Cleaning build outputs, caches, and dependencies..."
+ @echo ""
+
+ @# Remove all node_modules directories
+ @echo "Removing node_modules..."
+ @find . -name "node_modules" -type d -prune -exec rm -rf {} + 2>/dev/null || true
+
+ @# Remove all dist directories (build outputs)
+ @echo "Removing dist directories..."
+ @find . -name "dist" -type d -prune -exec rm -rf {} + 2>/dev/null || true
+
+ @# Remove all .turbo directories (turbo cache)
+ @echo "Removing .turbo cache directories..."
+ @find . -name ".turbo" -type d -prune -exec rm -rf {} + 2>/dev/null || true
+
+ @# Remove coverage directories
+ @echo "Removing coverage directories..."
+ @find . -name "coverage" -type d -prune -exec rm -rf {} + 2>/dev/null || true
+
+ @# Remove Cloudflare wrangler cache
+ @echo "Removing .wrangler cache directories..."
+ @find . -name ".wrangler" -type d -prune -exec rm -rf {} + 2>/dev/null || true
+
+ @# Remove pnpm store (optional - uncomment if you want to clean the global pnpm cache too)
+ @# pnpm store prune
+
+ @echo ""
+	@echo "✅ Clean complete!"
+ @echo ""
+ @echo "To reinstall dependencies, run: pnpm install"
+
+# Set up environment files for local development
+setup-env:
+ @echo "Setting up environment files for local development..."
+ @echo ""
+
+ @# Create root .env if it doesn't exist
+ @if [ ! -f .env ]; then \
+ echo "Creating root .env from .env.example..."; \
+ cp .env.example .env; \
+	  echo "✅ Created .env in project root"; \
+ echo ""; \
+	  echo "⚠️ Please edit .env and add your OPENAI_API_KEY"; \
+ else \
+	  echo "✅ Root .env already exists"; \
+ fi
+ @echo ""
+
+ @# Create cloudflare .env if it doesn't exist
+ @if [ ! -f packages/mcp-cloudflare/.env ]; then \
+ echo "Creating packages/mcp-cloudflare/.env from .env.example..."; \
+ cp packages/mcp-cloudflare/.env.example packages/mcp-cloudflare/.env; \
+	  echo "✅ Created packages/mcp-cloudflare/.env"; \
+ echo ""; \
+	  echo "⚠️ Please edit packages/mcp-cloudflare/.env and add:"; \
+ echo " - SENTRY_CLIENT_ID"; \
+ echo " - SENTRY_CLIENT_SECRET"; \
+ echo " - COOKIE_SECRET"; \
+ echo ""; \
+	  echo "📝 See README.md for instructions on creating a Sentry OAuth App"; \
+ else \
+	  echo "✅ packages/mcp-cloudflare/.env already exists"; \
+ fi
+ @echo ""
+	@echo "🎉 Environment setup complete!"
+ @echo ""
+ @echo "Next steps:"
+ @echo "1. Edit the .env files with your credentials"
+ @echo "2. Run 'pnpm dev' to start the development server"
\ No newline at end of file
diff --git a/README.md b/README.md
index c8863e40a..998e5565e 100644
--- a/README.md
+++ b/README.md
@@ -1,11 +1,10 @@
# sentry-mcp
[](https://codecov.io/gh/getsentry/sentry-mcp)
-[](https://smithery.ai/server/@getsentry/sentry-mcp)
-This is a prototype of a remote MCP sever, acting as a middleware to the upstream Sentry API provider.
+Sentry's MCP service is primarily designed for human-in-the-loop coding agents. Our tool selection and priorities are focused on developer workflows and debugging use cases, rather than providing a general-purpose MCP server for all Sentry functionality.
-It is based on [Cloudflare's work towards remote MCPs](https://blog.cloudflare.com/remote-model-context-protocol-servers-mcp/).
+This remote MCP server acts as middleware to the upstream Sentry API, optimized for coding assistants like Cursor, Claude Code, and similar development tools. It's based on [Cloudflare's work towards remote MCPs](https://blog.cloudflare.com/remote-model-context-protocol-servers-mcp/).
## Getting Started
@@ -13,13 +12,15 @@ You'll find everything you need to know by visiting the deployed service in prod
-If you're looking to contribute, learn how it works, or to run this for self-hosted Sentry, continue below..
+If you're looking to contribute, learn how it works, or to run this for self-hosted Sentry, continue below.
### Stdio vs Remote
While this repository is focused on acting as an MCP service, we also support a `stdio` transport. This is still a work in progress, but it is the easiest way to run the MCP against a self-hosted Sentry install.
-To utilize the `stdoio` transport, you'll need to create an Personal API Token (PAT) in Sentry with the necessary scopes. As of writing this is:
+**Note:** The AI-powered search tools (`search_events` and `search_issues`) require an OpenAI API key. These tools use natural language processing to translate queries into Sentry's query syntax. Without the API key, these specific tools will be unavailable, but all other tools will function normally.
+
+To utilize the `stdio` transport, you'll need to create a User Auth Token in Sentry with the necessary scopes. As of writing, these are:
```
org:read
@@ -27,22 +28,30 @@ project:read
project:write
team:read
team:write
-event:read
+event:write
```
Launch the transport:
```shell
-npx @sentry/mcp-server@latest --access-token=sentry-pat --host=sentry.example.com
+npx @sentry/mcp-server@latest --access-token=sentry-user-token
```
+Need to connect to a self-hosted deployment? Add `--host` (hostname only, e.g. `--host=sentry.example.com`) when you run the command.
+
Note: You can also use environment variables:
```shell
-SENTRY_AUTH_TOKEN=
+SENTRY_ACCESS_TOKEN=
+# Optional overrides for self-hosted deployments
SENTRY_HOST=
+OPENAI_API_KEY= # Required for AI-powered search tools (search_events, search_issues)
```
+If you leave the host variable unset, the CLI automatically targets the Sentry
+SaaS service. Only set the override when you operate self-hosted Sentry.
+
### MCP Inspector
MCP includes an [Inspector](https://modelcontextprotocol.io/docs/tools/inspector), to easily test the service:
@@ -51,66 +60,112 @@ MCP includes an [Inspector](https://modelcontextprotocol.io/docs/tools/inspector
pnpm inspector
```
-Enter the MCP server URL (http://localhost:5173) and hit connect. This should trigger the authentication flow for you.
+Enter the MCP server URL (`http://localhost:5173/mcp` for the local dev server) and hit connect. This should trigger the authentication flow for you.
Note: If you have issues with your OAuth flow when accessing the inspector on `127.0.0.1`, try using `localhost` instead by visiting `http://localhost:6274`.
## Local Development
-If you'd like to iterate and test your MCP server, you can do so in local development. This will require you to create another OAuth App in Sentry (Settings => API => [Applications](https://sentry.io/settings/account/api/applications/)):
+To contribute changes, you'll need to set up your local environment:
-- For the Homepage URL, specify `http://localhost:8788`
-- For the Authorized Redirect URIs, specify `http://localhost:8788/callback`
-- Note your Client ID and generate a Client secret.
-- Create a `.dev.vars` file in your project root with:
+1. **Set up environment files:**
-```shell
-SENTRY_CLIENT_ID=your_development_sentry_client_id
-SENTRY_CLIENT_SECRET=your_development_sentry_client_secret
-```
+ ```shell
+ make setup-env # Creates both .env files from examples
+ ```
+
+2. **Create an OAuth App in Sentry** (Settings => API => [Applications](https://sentry.io/settings/account/api/applications/)):
+
+ - Homepage URL: `http://localhost:5173`
+ - Authorized Redirect URIs: `http://localhost:5173/oauth/callback`
+ - Note your Client ID and generate a Client secret
+
+3. **Configure your credentials:**
+
+ - Edit `.env` in the root directory and add your `OPENAI_API_KEY`
+ - Edit `packages/mcp-cloudflare/.env` and add:
+ - `SENTRY_CLIENT_ID=your_development_sentry_client_id`
+ - `SENTRY_CLIENT_SECRET=your_development_sentry_client_secret`
+ - `COOKIE_SECRET=my-super-secret-cookie`
+
+4. **Start the development server:**
+
+ ```shell
+ pnpm dev
+ ```
### Verify
-Run the server locally to make it available at `http://localhost:8788`
+Run the server locally to make it available at `http://localhost:5173`
```shell
pnpm dev
```
-To test the local server, enter `http://localhost:8788/sse` into Inspector and hit connect. Once you follow the prompts, you'll be able to "List Tools".
+To test the local server, enter `http://localhost:5173/mcp` into Inspector and hit connect. Once you follow the prompts, you'll be able to "List Tools".
### Tests
-There are two test suites included: basic unit tests, and some evaluations.
+There are three ways to test: unit tests, evaluations, and manual testing.
-Unit tests can be run using:
+**Unit tests** can be run using:
```shell
pnpm test
```
-Evals will require a `.env` file with some config:
+**Evaluations** require a `.env` file in the project root with some config:
```shell
-OPENAI_API_KEY=
+# .env (in project root)
+OPENAI_API_KEY= # Also required for AI-powered search tools in production
```
-Once thats done you can run them using:
+Note: The root `.env` file provides defaults for all packages. Individual packages can have their own `.env` files to override these defaults during development.
+
+Once that's done you can run them using:
```shell
-pnpm test
+pnpm eval
```
-## Notes
+**Manual testing** (preferred for testing MCP changes):
+
+```shell
+# Test with local dev server (default: http://localhost:5173)
+pnpm -w run cli "who am I?"
+
+# Test agent mode (use_sentry tool only)
+pnpm -w run cli --agent "who am I?"
+
+# Test against production
+pnpm -w run cli --mcp-host=https://mcp.sentry.dev "query"
+
+# Test with local stdio mode (requires SENTRY_ACCESS_TOKEN)
+pnpm -w run cli --access-token=TOKEN "query"
+```
+
+Note: The CLI defaults to `http://localhost:5173`. Override with `--mcp-host` or set `MCP_URL` environment variable.
+
+**Comprehensive testing playbooks:**
+- **Stdio testing:** See `docs/testing-stdio.md` for complete guide on building, running, and testing the stdio implementation (IDEs, MCP Inspector)
+- **Remote testing:** See `docs/testing-remote.md` for complete guide on testing the remote server (OAuth, web UI, CLI client)
+
+## Development Notes
+
+### Automated Code Review
-### Using Claude and other MCP Clients
+This repository uses automated code review tools (like Cursor BugBot) to help identify potential issues in pull requests. These tools provide helpful feedback and suggestions, but **we do not recommend making these checks required** as the accuracy is still evolving and can produce false positives.
-When using Claude to connect to your remote MCP server, you may see some error messages. This is because Claude Desktop doesn't yet support remote MCP servers, so it sometimes gets confused. To verify whether the MCP server is connected, hover over the π¨ icon in the bottom right corner of Claude's interface. You should see your tools available there.
+The automated reviews should be treated as:
-### Using Cursor and other MCP Clients
+- ✅ **Helpful suggestions** to consider during code review
+- ✅ **Starting points** for discussion and improvement
+- ❌ **Not blocking requirements** for merging PRs
+- ❌ **Not replacements** for human code review
-To connect Cursor with your MCP server, choose `Type`: "Command" and in the `Command` field, combine the command and args fields into one (e.g. `npx mcp-remote@latest https://..workers.dev/sse`).
+When addressing automated feedback, focus on the underlying concerns rather than strictly following every suggestion.
-Note that while Cursor supports HTTP+SSE servers, it doesn't support authentication, so you still need to use `mcp-remote` (and to use a STDIO server, not an HTTP one).
+### Contributor Documentation
-You can connect your MCP server to other MCP clients like Windsurf by opening the client's configuration file, adding the same JSON that was used for the Claude setup, and restarting the MCP client.
+Looking to contribute or explore the full documentation map? See `CLAUDE.md` (also available as `AGENTS.md`) for contributor workflows and the complete docs index. The `docs/` folder contains the per-topic guides and tool-integrated `.mdc` files.
diff --git a/benchmark-agent.sh b/benchmark-agent.sh
new file mode 100755
index 000000000..f82889986
--- /dev/null
+++ b/benchmark-agent.sh
@@ -0,0 +1,118 @@
+#!/bin/bash
+
+# Benchmark script for comparing direct vs agent mode performance
+# Usage: ./benchmark-agent.sh [iterations]
+
+ITERATIONS=${1:-10}
+QUERY="what organizations do I have access to?"
+
+echo "=========================================="
+echo "MCP Agent Performance Benchmark"
+echo "=========================================="
+echo "Query: $QUERY"
+echo "Iterations: $ITERATIONS"
+echo ""
+
+# Arrays to store timings
+declare -a direct_times
+declare -a agent_times
+
+echo "Running direct mode tests..."
+for i in $(seq 1 $ITERATIONS); do
+ echo -n " Run $i/$ITERATIONS... "
+
+ # Run and capture timing (real time in seconds)
+ START=$(date +%s.%N)
+ pnpm -w run cli "$QUERY" > /dev/null 2>&1
+ END=$(date +%s.%N)
+
+ # Calculate duration
+ DURATION=$(echo "$END - $START" | bc)
+ direct_times+=($DURATION)
+
+ echo "${DURATION}s"
+done
+
+echo ""
+echo "Running agent mode tests..."
+for i in $(seq 1 $ITERATIONS); do
+ echo -n " Run $i/$ITERATIONS... "
+
+ # Run and capture timing
+ START=$(date +%s.%N)
+ pnpm -w run cli --agent "$QUERY" > /dev/null 2>&1
+ END=$(date +%s.%N)
+
+ # Calculate duration
+ DURATION=$(echo "$END - $START" | bc)
+ agent_times+=($DURATION)
+
+ echo "${DURATION}s"
+done
+
+echo ""
+echo "=========================================="
+echo "Results"
+echo "=========================================="
+
+# Calculate statistics for direct mode
+direct_sum=0
+direct_min=${direct_times[0]}
+direct_max=${direct_times[0]}
+for time in "${direct_times[@]}"; do
+ direct_sum=$(echo "$direct_sum + $time" | bc)
+ if (( $(echo "$time < $direct_min" | bc -l) )); then
+ direct_min=$time
+ fi
+ if (( $(echo "$time > $direct_max" | bc -l) )); then
+ direct_max=$time
+ fi
+done
+direct_avg=$(echo "scale=2; $direct_sum / $ITERATIONS" | bc)
+
+# Calculate statistics for agent mode
+agent_sum=0
+agent_min=${agent_times[0]}
+agent_max=${agent_times[0]}
+for time in "${agent_times[@]}"; do
+ agent_sum=$(echo "$agent_sum + $time" | bc)
+ if (( $(echo "$time < $agent_min" | bc -l) )); then
+ agent_min=$time
+ fi
+ if (( $(echo "$time > $agent_max" | bc -l) )); then
+ agent_max=$time
+ fi
+done
+agent_avg=$(echo "scale=2; $agent_sum / $ITERATIONS" | bc)
+
+# Calculate difference
+diff=$(echo "scale=2; $agent_avg - $direct_avg" | bc)
+percent=$(echo "scale=1; ($agent_avg - $direct_avg) / $direct_avg * 100" | bc)
+
+echo ""
+echo "Direct Mode:"
+echo " Min: ${direct_min}s"
+echo " Max: ${direct_max}s"
+echo " Average: ${direct_avg}s"
+echo ""
+echo "Agent Mode:"
+echo " Min: ${agent_min}s"
+echo " Max: ${agent_max}s"
+echo " Average: ${agent_avg}s"
+echo ""
+echo "Difference:"
+if (( $(echo "$diff > 0" | bc -l) )); then
+ echo " +${diff}s (${percent}% slower)"
+elif (( $(echo "$diff < 0" | bc -l) )); then
+ abs_diff=$(echo "scale=2; -1 * $diff" | bc)
+ abs_percent=$(echo "scale=1; -1 * $percent" | bc)
+ echo " -${abs_diff}s (${abs_percent}% faster)"
+else
+ echo " No difference (0%)"
+fi
+echo ""
+
+# Show all individual results
+echo "All timings:"
+echo " Direct: ${direct_times[*]}"
+echo " Agent: ${agent_times[*]}"
diff --git a/biome.json b/biome.json
index 81b033d3b..8966062ec 100644
--- a/biome.json
+++ b/biome.json
@@ -4,7 +4,11 @@
"enabled": true
},
"files": {
- "ignore": ["worker-configuration.d.ts", "tsconfig*.json"]
+ "ignore": [
+ "worker-configuration.d.ts",
+ "tsconfig*.json",
+ "packages/mcp-server-mocks/src/fixtures/**"
+ ]
},
"vcs": {
"enabled": true,
@@ -15,6 +19,9 @@
"enabled": true,
"rules": {
"recommended": true,
+ "correctness": {
+ "noUnusedImports": "warn"
+ },
"suspicious": {
"noExplicitAny": "off",
"noDebugger": "off",
diff --git a/codecov.yml b/codecov.yml
index bfdc9877d..03f06fc72 100644
--- a/codecov.yml
+++ b/codecov.yml
@@ -6,3 +6,5 @@ coverage:
patch:
default:
informational: true
+
+comment: false
diff --git a/core b/core
new file mode 100644
index 000000000..0c77d3f22
Binary files /dev/null and b/core differ
diff --git a/docs/README.md b/docs/README.md
new file mode 100644
index 000000000..101c87cd1
--- /dev/null
+++ b/docs/README.md
@@ -0,0 +1,26 @@
+# Contributor Docs
+
+This directory contains contributor documentation used by humans and LLMs. To avoid duplication, the canonical documentation map and contributor workflow live in `CLAUDE.md` (also available as `AGENTS.md`).
+
+## Purpose
+
+- Central home for all contributor-focused docs (.mdc files)
+- Consumed by tools (e.g., Cursor) via direct file references
+
+## Start Here
+
+- Doc map and workflow: see `CLAUDE.md` / `AGENTS.md`
+- Per-topic guides live in this folder (e.g., `adding-tools.mdc`)
+
+## Integration with Tools
+
+- Cursor IDE: this folder is referenced directly as contextual rules
+- Other AI tools: reference specific `.mdc` files as needed
+
+## LLM-Specific
+
+- Meta-docs live under `llms/` (e.g., `llms/document-scopes.mdc`)
+
+## Maintenance
+
+Update docs when patterns change, new tools are added, or common issues arise. Keep the index in `CLAUDE.md` authoritative; avoid mirroring it here.
diff --git a/docs/adding-tools.mdc b/docs/adding-tools.mdc
new file mode 100644
index 000000000..67061f32a
--- /dev/null
+++ b/docs/adding-tools.mdc
@@ -0,0 +1,411 @@
+---
+description: Step-by-step guide for adding new tools to the Sentry MCP server.
+globs:
+alwaysApply: false
+---
+# Adding New Tools
+
+Step-by-step guide for adding new tools to the Sentry MCP server.
+
+## Tool Count Limits
+
+**IMPORTANT**: AI agents have a hard cap of 45 total tools available, and Sentry MCP must not consume all of the available tool slots:
+- **Target**: Keep total tool count around 20
+- **Maximum**: Absolutely no more than 25 tools
+- **Constraint**: This limit exists in Cursor and possibly other tools
+
+Before adding a new tool, consider if it could be:
+1. Combined with an existing tool
+2. Implemented as a parameter variant
+3. Truly necessary for core functionality
+
+## Tool Structure
+
+Each tool consists of:
+1. **Tool Module** - Single file in `src/tools/your-tool-name.ts` with definition and handler
+2. **Tests** - Unit tests in `src/tools/your-tool-name.test.ts`
+3. **Mocks** - API responses in `mcp-server-mocks`
+4. **Evals** - Integration tests (use sparingly)
+
+## Step 1: Create the Tool Module
+
+Create `packages/mcp-server/src/tools/your-tool-name.ts`:
+
+```typescript
+import { z } from "zod";
+import { defineTool } from "./utils/defineTool";
+import { apiServiceFromContext } from "./utils/api-utils";
+import type { ServerContext } from "../types";
+
+export default defineTool({
+ name: "your_tool_name",
+ description: [
+  description: [
+    "One-line summary.",
+    "",
+    "Use this tool when you need to:",
+    "- Specific use case 1",
+    "- Specific use case 2",
+    "",
+    "Example:",
+    "your_tool_name(organizationSlug='my-org', param='value')",
+    "",
+    "Hints:",
+    "- Parameter interpretation hints",
+  ].join("\n"),
+ inputSchema: {
+ organizationSlug: z.string().describe("The organization's slug"),
+ regionUrl: z.string().optional().describe("Optional region URL"),
+ yourParam: z.string().describe("What values are expected"),
+ },
+ annotations: {
+ readOnlyHint: true,
+ openWorldHint: true,
+ },
+ async handler(params, context: ServerContext) {
+ // Implementation here
+ },
+});
+```
+
+### Safety Annotations
+
+**REQUIRED**: All tools must include safety annotations for MCP directory compliance.
+
+**Available annotations:**
+- `readOnlyHint` (boolean): Tool doesn't modify data
+- `destructiveHint` (boolean): Tool may modify/delete existing data
+- `idempotentHint` (boolean): Repeated calls with same arguments have no additional effect
+- `openWorldHint` (boolean): Tool interacts with external services (default: true for API calls)
+
+**Annotation patterns:**
+
+```typescript
+// Read-only tools (queries, lists, searches)
+annotations: {
+ readOnlyHint: true,
+ openWorldHint: true,
+}
+
+// Create tools (additive, non-destructive)
+annotations: {
+ readOnlyHint: false,
+ destructiveHint: false,
+ openWorldHint: true,
+}
+
+// Update tools (modify existing data)
+annotations: {
+ readOnlyHint: false,
+ destructiveHint: true,
+ idempotentHint: true, // Same update twice = same result
+ openWorldHint: true,
+}
+```
+
+### Writing LLM-Friendly Descriptions
+
+Critical for LLM success:
+- Start with "Use this tool when you need to:"
+- Include concrete examples
+- Reference related tools
+- Explain parameter formats in `.describe()`
+
+## Step 2: Implement the Handler
+
+Add the handler implementation to your tool module:
+
+```typescript
+async handler(params, context: ServerContext) {
+ // 1. Get API service
+ const api = apiServiceFromContext(context, {
+ regionUrl: params.regionUrl,
+ });
+
+ // 2. Validate inputs (see common-patterns.mdc#error-handling)
+ if (!params.organizationSlug) {
+ throw new UserInputError(
+ "Organization slug is required. Use find_organizations() to list."
+ );
+ }
+
+ // 3. Set monitoring tags
+ setTag("organization.slug", params.organizationSlug);
+
+ // 4. Call API
+ const data = await api.yourMethod(params);
+
+ // 5. Format response
+ let output = `# Results in **${params.organizationSlug}**\n\n`;
+
+ if (data.length === 0) {
+ return output + "No results found.\n";
+ }
+
+ // 6. Format data
+ output += formatYourData(data);
+
+ // 7. Add next steps
+ output += "\n\n# Using this information\n\n";
+ output += "- Next tool to use: `related_tool(param='...')`\n";
+
+ return output;
+}
+```
+
+### Response Formatting
+
+See `common-patterns.mdc#response-formatting` for:
+- Markdown structure
+- ID/URL formatting
+- Next steps guidance
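The `formatYourData` call in step 6 above stands in for whatever Markdown builder suits your endpoint. A minimal sketch, assuming a hypothetical item shape:

```typescript
// Hypothetical result shape; real tools derive this from their Zod schemas.
interface ResultItem {
  id: string;
  title: string;
  url: string;
}

// One Markdown bullet per result, ID first so the calling
// agent can feed it back into follow-up tool calls.
function formatYourData(items: ResultItem[]): string {
  return items
    .map((item) => `- **${item.id}**: ${item.title} ([link](${item.url}))`)
    .join("\n");
}
```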
+
+## Step 3: Add Tests
+
+Follow comprehensive testing patterns from `testing.mdc` for unit, integration, and evaluation tests.
+
+Create `packages/mcp-server/src/tools/your-tool-name.test.ts`:
+
+```typescript
+import { describe, it, expect } from "vitest";
+import yourToolName from "./your-tool-name.js";
+
+describe("your_tool_name", () => {
+ it("returns formatted output", async () => {
+ const result = await yourToolName.handler(
+ { organizationSlug: "test-org", yourParam: "value" },
+ {
+ accessToken: "test-token",
+ userId: "1",
+ organizationSlug: null,
+ }
+ );
+
+ expect(result).toMatchInlineSnapshot(`
+ "# Results in **test-org**
+
+ Expected output here"
+ `);
+ });
+});
+```
+
+**Testing Requirements:**
+- Input validation (see `testing.mdc#testing-error-cases`)
+- Error handling (use patterns from `common-patterns.mdc#error-handling`)
+- Output formatting with snapshots
+- API integration with MSW mocks
+
+**After changing output, update snapshots:**
+```bash
+cd packages/mcp-server
+pnpm vitest --run -u
+```
+
+## Step 4: Add Mocks
+
+In `packages/mcp-server-mocks/src/handlers/`:
+
+```typescript
+{
+ method: "get",
+ path: "/api/0/organizations/:org/your-endpoint/",
+ fetch: async ({ request, params }) => {
+ // Validate parameters
+ if (!params.org) {
+ return HttpResponse.json("Invalid org", { status: 400 });
+ }
+
+ // Return fixture
+ return HttpResponse.json(yourDataFixture);
+ }
+}
+```
+
+See `api-patterns.mdc#mock-patterns` for validation examples.
+
+## Step 5: Add Evaluation Tests (Sparingly)
+
+**β οΈ Each eval costs time and API credits. Only test core functionality!**
+
+```typescript
+describeEval("your-tool", {
+ data: async () => [
+ {
+ input: `Primary use case in ${FIXTURES.organizationSlug}`,
+ expected: "Expected response"
+ },
+ // Maximum 2-3 scenarios!
+ ],
+ task: TaskRunner(),
+ scorers: [Factuality()],
+ threshold: 0.6,
+});
+```
+
+## Testing Workflow
+
+```bash
+# 1. Run unit tests
+pnpm test tools.test
+
+# 2. Test with inspector
+pnpm inspector
+
+# 3. Run minimal evals
+pnpm eval your-tool
+```
+
+## Checklist
+
+- [ ] Tool module in `src/tools/your-tool-name.ts` (definition + handler)
+- [ ] Unit tests with snapshots
+- [ ] Mock responses
+- [ ] 1-2 eval tests (if critical)
+- [ ] Run quality checks
+
+## Agent-in-Tool Pattern
+
+Some tools (`search_events` and `search_issues`) embed AI agents to handle complex natural language translation. Use this pattern sparingly.
+
+### When to Use This Pattern
+
+1. **Complex query translation** - Converting natural language to domain-specific query languages
+2. **Dynamic field discovery** - When available fields vary by project/context
+3. **Semantic understanding** - When the tool needs to understand intent, not just parameters
+
+### When NOT to Use This Pattern
+
+1. **Simple API calls** - Direct parameter mapping to API endpoints
+2. **Deterministic operations** - Operations with clear, unambiguous inputs
+3. **Performance-critical paths** - Embedded agents add latency and cost
+
+### Architecture
+
+```typescript
+// Tool handler delegates to embedded agent
+async handler(params, context) {
+ // 1. Embedded agent translates natural language
+ const translated = await translateQuery(params.naturalLanguageQuery, ...);
+
+ // 2. Tool executes the translated query
+ const results = await apiService.searchEvents(translated.query, ...);
+
+ // 3. Format and return results
+ return formatResults(results);
+}
+```
+
+### Error Handling Philosophy
+
+**DO NOT retry internally**. When the embedded agent fails:
+1. Throw a clear `UserInputError` with specific guidance
+2. Let the calling agent (Claude/Cursor) see the error
+3. The calling agent can retry with corrections if needed
+
+**IMPORTANT**: Keep system prompts static to enable LLM provider caching. Never modify prompts based on errors.
+
+```typescript
+// BAD: Dynamic prompt modification prevents caching
+let systemPrompt = basePrompt;
+if (previousError) {
+ systemPrompt += `\nPrevious error: ${previousError}`;
+}
+
+// GOOD: Static prompt with clear error boundaries
+const systemPrompt = STATIC_SYSTEM_PROMPT;
+try {
+ return await translateQuery(...);
+} catch (error) {
+ throw new UserInputError(`Could not translate query: ${error.message}`);
+}
+```
+
+### Tool Boundaries
+
+1. **Embedded Agent Responsibilities**:
+ - Translate natural language to structured queries
+ - Discover available fields/attributes
+ - Validate query syntax
+
+2. **Tool Handler Responsibilities**:
+ - Execute the translated query
+ - Handle API errors
+ - Format results for the calling agent
+
+3. **Calling Agent Responsibilities**:
+ - Decide when to use the tool
+ - Handle errors and retry if needed
+ - Interpret results
+
+### Implementation Guidelines
+
+1. **Create an AGENTS.md file** in the tool directory documenting:
+ - The embedded agent's prompt and behavior
+ - Common translation patterns
+ - Known limitations
+
+2. **Keep agent prompts focused** - Don't duplicate general MCP knowledge
+3. **Use structured outputs** - Define clear schemas for agent responses
+4. **Provide tool discovery** - Let agents explore available fields dynamically
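For guideline 3, a structured output is simply a contract the embedded agent's response must satisfy. A dependency-free sketch; field names here are illustrative, and the real schemas live next to each tool:

```typescript
// Illustrative response contract for an embedded translation agent.
interface TranslatedQuery {
  query: string; // Sentry query syntax, e.g. "is:unresolved level:error"
  fields: string[]; // columns to return
  sort: string; // sort expression
  explanation: string; // why the agent chose this translation
}

// Runtime guard so the tool handler can reject malformed agent output early.
function isTranslatedQuery(value: unknown): value is TranslatedQuery {
  const v = value as Partial<TranslatedQuery>;
  return (
    typeof v?.query === "string" &&
    Array.isArray(v?.fields) &&
    typeof v?.sort === "string" &&
    typeof v?.explanation === "string"
  );
}
```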
+
+See `packages/mcp-server/src/tools/search-events/` and `packages/mcp-server/src/tools/search-issues/` for examples.
+
+## Worker-Specific Tools
+
+Some tools may require access to Cloudflare Worker-specific bindings (like AutoRAG, D1, R2, etc.) that aren't available in the standard ServerContext. For these cases, create a separate endpoint in the Worker that the tool can call.
+
+### Example: RAG Search Endpoint
+
+The `search_docs` tool demonstrates this pattern:
+
+1. **Worker Route** (`/api/search`): Handles the actual AutoRAG interaction
+2. **MCP Tool**: Makes HTTP requests to the Worker endpoint
+3. **Authentication**: Uses the same Bearer token for security
+
+```typescript
+// In the Worker (routes/search.ts)
+export default new Hono().post("/", async (c) => {
+ const { query, maxResults } = await c.req.json();
+
+ // Access Worker-specific bindings
+ const results = await c.env.AI.autorag("sentry-docs").aiSearch({
+ query,
+ max_num_results: maxResults,
+ });
+
+ return c.json({ results });
+});
+
+// In the MCP tool (tools.ts)
+search_docs: async (context, params) => {
+ const response = await fetch(`${context.host}/api/search`, {
+ method: "POST",
+ headers: {
+ "Authorization": `Bearer ${context.accessToken}`,
+ "Content-Type": "application/json",
+ },
+ body: JSON.stringify(params),
+ });
+
+ const data = await response.json();
+ // Format and return results
+}
+```
+
+This pattern works with both Cloudflare-hosted and stdio transports.
+
+## Common Patterns
+
+- Error handling: `common-patterns.mdc#error-handling`
+- API usage: `api-patterns.mdc`
+- Testing: `testing.mdc`
+- Response formatting: `common-patterns.mdc#response-formatting`
+
+## References
+
+- Tool examples: `packages/mcp-server/src/tools.ts`
+- Schema patterns: `packages/mcp-server/src/schema.ts`
+- Mock examples: `packages/mcp-server-mocks/src/handlers/`
diff --git a/docs/api-patterns.mdc b/docs/api-patterns.mdc
new file mode 100644
index 000000000..824a3d56e
--- /dev/null
+++ b/docs/api-patterns.mdc
@@ -0,0 +1,220 @@
+# API Patterns
+
+Sentry API client usage, mocking, and testing patterns.
+
+## API Client Usage
+
+### Client Creation
+
+```typescript
+// Standard usage with context helper
+const apiService = apiServiceFromContext(context, {
+ regionUrl: params.regionUrl
+});
+
+// Direct instantiation
+const api = new SentryApiService({
+ host: "sentry.io",
+ accessToken: token
+});
+```
+
+See: `packages/mcp-server/src/api-utils.ts` and `adding-tools.mdc#step-2-implement-the-handler` for usage in tools.
+
+### Common Operations
+
+```typescript
+// List with filtering
+const issues = await api.issues.list({
+ organizationSlug: "org",
+ query: "is:unresolved",
+ sort: "date"
+});
+
+// Get specific resource
+const project = await api.projects.get({
+ organizationSlug: "org",
+ projectIdOrSlug: "frontend"
+});
+
+// Create/update
+await api.issues.update({
+ issueId: "123",
+ status: "resolved"
+});
+```
+
+### Multi-Region Support
+
+Sentry uses region-specific URLs:
+
+```typescript
+// Auto-detect from organization
+const orgs = await api.organizations.list();
+// Returns: { region_url: "https://us.sentry.io" }
+
+// Use region URL
+const api = apiServiceFromContext(context, {
+ regionUrl: org.region_url
+});
+```
+
+## Schema Patterns
+
+### Flexible Sentry Models
+
+```typescript
+// Support ID variations
+const IssueIdSchema = z.union([
+ z.string(), // "PROJ-123"
+ z.number() // 123456789
+]);
+
+// Partial with passthrough for unknowns
+const FlexibleSchema = BaseSchema
+ .partial()
+ .passthrough();
+
+// Nullable handling
+z.union([DateSchema, z.null()])
+```
+
+See Zod patterns: `common-patterns.mdc#zod-schema-patterns`
+
+### Type Safety
+
+For testing API patterns, see `testing.mdc#mock-server-setup`
+
+```typescript
+// Derive types from schemas
+export type Issue = z.infer<typeof IssueSchema>;
+
+// Runtime validation
+const issues = IssueListSchema.parse(response);
+```
+
+## Mock Patterns
+
+### MSW Handler Structure
+
+```typescript
+export const handlers = [
+ {
+ method: "get",
+ path: "/api/0/organizations/:org/issues/",
+ fetch: async ({ request, params }) => {
+ // Validate parameters
+ if (!params.org) {
+      return HttpResponse.json({ error: "Invalid org" }, { status: 400 });
+ }
+
+ // Return fixture
+ return HttpResponse.json(issueListFixture);
+ }
+ }
+];
+```
+
+See: `packages/mcp-server-mocks/src/handlers/`
+
+### Request Validation
+
+```typescript
+fetch: async ({ request }) => {
+ const url = new URL(request.url);
+ const query = url.searchParams.get("query");
+
+ // Validate query parameters
+ if (query && !isValidQuery(query)) {
+    return HttpResponse.json({ error: "Invalid query" }, { status: 400 });
+ }
+
+ // Filter based on query
+ const filtered = fixtures.filter(item =>
+ matchesQuery(item, query)
+ );
+
+ return HttpResponse.json(filtered);
+}
+```
+
+### Dynamic Responses
+
+```typescript
+// Support pagination
+const limit = parseInt(url.searchParams.get("limit") || "100");
+const cursor = url.searchParams.get("cursor");
+
+const start = cursor ? parseInt(cursor) : 0;
+const page = fixtures.slice(start, start + limit);
+
+return HttpResponse.json(page, {
+ headers: {
+ "Link": `<...?cursor=${start + limit}>; rel="next"`
+ }
+});
+```
+
+## Testing with Mocks
+
+### Setup Pattern
+
+```typescript
+import { setupMockServer } from "@sentry-mcp/mocks";
+
+const server = setupMockServer();
+
+beforeAll(() => server.listen());
+afterEach(() => server.resetHandlers());
+afterAll(() => server.close());
+```
+
+### Override Handlers
+
+```typescript
+it("handles errors", async () => {
+ server.use(
+ http.get("*/issues/", () =>
+ HttpResponse.json({ error: "Server error" }, { status: 500 })
+ )
+ );
+
+ await expect(api.issues.list(params))
+ .rejects.toThrow(ApiError);
+});
+```
+
+## Error Patterns
+
+### ApiError Handling
+
+```typescript
+try {
+ const data = await api.issues.list(params);
+} catch (error) {
+ if (error instanceof ApiError) {
+ // Handle specific status codes
+ if (error.status === 404) {
+ throw new UserInputError("Organization not found");
+ }
+ }
+ throw error;
+}
+```
+
+See error patterns: `common-patterns.mdc#error-handling`
+
+## Best Practices
+
+1. **Always use context helper** when in tools/prompts
+2. **Handle region URLs** for multi-region support
+3. **Validate schemas** at API boundaries
+4. **Mock realistically** in tests
+5. **Transform errors** for LLM consumption
+
+## References
+
+- API Client: `packages/mcp-server/src/api-client/`
+- Mock handlers: `packages/mcp-server-mocks/src/handlers/`
+- Fixtures: `packages/mcp-server-mocks/src/fixtures/`
+- API Utils: `packages/mcp-server/src/api-utils.ts`
\ No newline at end of file
diff --git a/docs/architecture.mdc b/docs/architecture.mdc
new file mode 100644
index 000000000..5b0be416d
--- /dev/null
+++ b/docs/architecture.mdc
@@ -0,0 +1,352 @@
+---
+description:
+globs:
+alwaysApply: false
+---
+# Architecture
+
+System design and package interactions for the Sentry MCP server.
+
+## Overview
+
+Sentry MCP is a Model Context Protocol server that exposes Sentry's functionality to AI assistants. It provides tools that enable LLMs to interact with Sentry's error tracking, performance monitoring, and other features.
+
+## Package Structure
+
+The project is a pnpm monorepo with clear separation of concerns:
+
+```
+packages/
+├── mcp-server/        # Core MCP server (npm package)
+├── mcp-cloudflare/    # Cloudflare Workers deployment
+├── mcp-server-evals/  # Evaluation test suite
+├── mcp-server-mocks/  # Shared mock data
+└── mcp-test-client/   # Interactive CLI client
+```
+
+### packages/mcp-server
+
+The core MCP implementation and npm-publishable package.
+
+**Structure:**
+
+```
+src/
+├── api-client/         # Sentry API client
+├── internal/           # Utilities (not exposed)
+├── transports/         # Transport implementations
+├── tools/              # Tool implementations
+├── toolDefinitions.ts  # Tool schemas and metadata
+├── server.ts           # MCP server configuration
+├── errors.ts           # Custom error types
+└── types.ts            # TypeScript definitions
+```
+
+**Key responsibilities:**
+
+- Implements MCP protocol using the official SDK
+- Provides tools for Sentry operations (issues, projects, etc.)
+- Handles authentication and API communication
+- Formats responses for LLM consumption
+
+### packages/mcp-cloudflare
+
+A separate web chat application that uses the MCP server.
+
+**Note**: This is NOT part of the MCP server itself - it's a demonstration of how to build a chat interface that consumes MCP.
+
+See "Overview" in @docs/cloudflare/overview.md for details.
+
+### packages/mcp-server-evals
+
+Evaluation tests that verify real Sentry operations.
+
+**Uses:**
+
+- Vercel AI SDK for LLM integration
+- Real Sentry API calls with mocked responses
+- Factuality scoring for output validation
+
+### packages/mcp-server-mocks
+
+Centralized mock data and MSW handlers.
+
+**Provides:**
+
+- Fixture data for all Sentry entities
+- MSW request handlers
+- Shared test utilities
+
+### packages/mcp-test-client
+
+Interactive CLI for testing the MCP server with an AI agent.
+
+**Key features:**
+
+- Vercel AI SDK integration with Anthropic
+- Interactive and single-prompt modes
+- OAuth authentication for remote servers
+- Stdio transport for local testing
+- Clean terminal output with tool call visualization
+
+## Key Architectural Decisions
+
+### 1. Protocol Implementation
+
+Uses the official MCP SDK (`@modelcontextprotocol/sdk`) to ensure compatibility:
+
+```typescript
+const server = new Server({
+ name: "sentry-mcp",
+ version: "1.0.0"
+});
+
+server.setRequestHandler(ListToolsRequestSchema, () => ({
+ tools: TOOL_DEFINITIONS
+}));
+```
+
+### 2. Transport Layers
+
+The MCP server supports multiple transport mechanisms:
+
+**Stdio Transport** (Primary):
+
+- Direct process communication
+- Used by IDEs (Cursor, VS Code) and local tools
+- Configured via command-line args
+- This is the standard MCP transport
+
+**HTTP Transport** (For web apps):
+
+- Allows web applications to connect to MCP via HTTP streaming
+- Used by the example Cloudflare chat app
+- Main endpoint: `/mcp`
+- Not part of core MCP spec
+
+**SSE Transport** (Deprecated - will be removed):
+
+- Legacy Server-Sent Events transport
+- Endpoint: `/sse`
+- Does not support organization/project constraints
+- New integrations should use HTTP transport via `/mcp` endpoint
+
+### 3. Authentication Strategy
+
+The MCP server uses Sentry access tokens for authentication:
+
+```bash
+# Via command line
+pnpm start:stdio --access-token=<token> --host=<host>
+
+# Via environment variable
+SENTRY_ACCESS_TOKEN=<token> pnpm start:stdio
+```
+
+Web applications using MCP (like the Cloudflare example) handle their own authentication and pass tokens to the MCP server.
+
+### 4. API Client Design
+
+Centralized client with method-specific implementations:
+
+```typescript
+class SentryApiService {
+ constructor(private config: { host: string; accessToken: string })
+
+ // Resource-specific methods
+ issues = {
+ list: (params) => this.fetch("/issues/", params),
+    get: (params) => this.fetch(`/issues/${params.issueId}/`)
+ }
+}
+```
+
+### 5. Tool Design Pattern
+
+Each tool follows a consistent structure:
+
+1. **Definition** (toolDefinitions.ts):
+ - Schema with Zod
+ - LLM-friendly description
+ - Parameter documentation
+
+2. **Handler** (tools.ts):
+ - Parameter validation
+ - API calls via client
+ - Response formatting
+ - Error handling
+
+3. **Testing**:
+ - Unit tests with snapshots
+ - Mock API responses
+ - Evaluation tests
+
+**Tool Count Constraints:**
+
+- AI agents have a 45 tool limit (Cursor, etc.)
+- Sentry MCP must stay under 25 tools (target: ~20)
+- Consolidate functionality where possible
+- Consider parameter variants over new tools
+
+### 6. Error Handling Philosophy
+
+Two-tier error system:
+
+- **UserInputError**: Invalid parameters, clear user feedback
+- **System errors**: Logged to Sentry, generic message to user
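
A minimal sketch of this split, assuming hypothetical names (`UserInputError`, `formatErrorForLlm`) rather than the repository's actual implementation:

```typescript
// Sketch only: illustrates the two-tier split, not the real error module.
class UserInputError extends Error {}

function formatErrorForLlm(error: unknown): string {
  if (error instanceof UserInputError) {
    // Invalid parameters: surface the message so the LLM can self-correct
    return `Input error: ${error.message}`;
  }
  // System errors: would be reported to Sentry here; keep the message generic
  return "An internal error occurred. The team has been notified.";
}
```

The key property is asymmetry: user-facing detail for input mistakes, a generic message for everything else.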
+
+### 7. Build System
+
+Turbo for monorepo orchestration:
+
+- Dependency-aware builds
+- Parallel task execution
+- Shared TypeScript configs
+- Centralized linting/formatting
+
+## Data Flow
+
+```
+1. LLM makes tool call
+        ↓
+2. MCP server receives request
+        ↓
+3. Handler validates parameters
+        ↓
+4. API client makes Sentry call
+        ↓
+5. Response formatted for LLM
+        ↓
+6. MCP sends response back
+```
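
Steps 3-5 can be sketched as a single handler; the `Api` shape and `handleFindIssues` name are illustrative assumptions, not the actual tool code:

```typescript
// Hypothetical handler showing validate -> call -> format; names are assumed.
class UserInputError extends Error {}

interface Api {
  listIssues(org: string): Promise<Array<{ shortId: string; title: string }>>;
}

async function handleFindIssues(
  api: Api,
  params: { organizationSlug?: string },
): Promise<string> {
  // 3. Handler validates parameters
  if (!params.organizationSlug) {
    throw new UserInputError("organizationSlug is required");
  }
  // 4. API client makes Sentry call
  const issues = await api.listIssues(params.organizationSlug);
  // 5. Response formatted for LLM consumption (markdown list)
  return issues.map((i) => `- **${i.shortId}**: ${i.title}`).join("\n");
}
```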
+
+## MCP Concept Mappings
+
+### Tools
+
+Execute actions and retrieve data:
+
+- `find_issues` - Search for issues
+- `get_project_details` - Fetch project info
+- `create_issue_comment` - Add comments
+- `search_docs` - Search Sentry documentation
+- `get_doc` - Fetch full documentation pages
+
+## Performance Considerations
+
+- Stateless server design
+- No caching between requests
+- Streaming responses where applicable
+- Parallel API calls when possible
+
+## Two-Tier Agent Architecture
+
+Some tools (`search_events` and `search_issues`) implement a two-tier agent pattern:
+
+### Tier 1: Calling Agent (Claude/Cursor)
+- Decides when to use search tools
+- Provides natural language queries
+- Handles errors and retries
+- Interprets results for the user
+
+### Tier 2: Embedded Agent (GPT-5)
+- Lives inside the MCP tool
+- Translates natural language to Sentry query syntax
+- Has its own tools for field discovery
+- Returns structured query objects
+
+### Data Flow Example
+
+```
+1. User: "Show me errors from yesterday"
+        ↓
+2. Claude: Calls search_events(naturalLanguageQuery="errors from yesterday")
+        ↓
+3. MCP Tool Handler: Receives request
+        ↓
+4. Embedded Agent (GPT-5):
+   - Determines dataset: "errors"
+   - Calls datasetAttributes tool
+   - Translates to: {query: "", fields: [...], timeRange: {statsPeriod: "24h"}}
+        ↓
+5. MCP Tool: Executes Sentry API call
+        ↓
+6. Results formatted and returned to Claude
+        ↓
+7. Claude: Presents results to user
+```
+
+### Design Rationale
+
+This pattern is used when:
+- Query languages are complex (Sentry's search syntax)
+- Available fields vary by context (project-specific attributes)
+- Semantic understanding is required ("yesterday" → timeRange)
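
As a toy illustration of the last point (the real translation is performed by the embedded agent, not a lookup table):

```typescript
// Illustrative only: shows the shape of the "phrase -> statsPeriod" mapping
// that the embedded agent produces semantically.
const RELATIVE_PERIODS: Record<string, string> = {
  "last hour": "1h",
  "yesterday": "24h",
  "last week": "7d",
};

function toStatsPeriod(phrase: string): string | undefined {
  return RELATIVE_PERIODS[phrase.toLowerCase()];
}
```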
+
+### Error Handling
+
+- Embedded agent errors are returned as UserInputError
+- Calling agent sees the error and can retry
+- No internal retry loops - single responsibility
+
+### use_sentry Tool Architecture
+
+The `use_sentry` tool provides a natural language interface to all Sentry MCP tools using an in-memory MCP client-server architecture:
+
+**Architecture**:
+1. Creates linked pair of `InMemoryTransport` from MCP SDK
+2. Builds internal MCP server with all 18 tools (excludes use_sentry to prevent recursion)
+3. Connects server to serverTransport within ServerContext
+4. Creates MCP client with clientTransport
+5. Embedded GPT-5 agent accesses tools through MCP protocol
+6. Zero network overhead - all communication is in-memory
+
+**Data Flow**:
+```
+User request → use_sentry handler
+        ↓
+Creates InMemoryTransport pair
+        ↓
+Builds internal MCP server (18 tools)
+        ↓
+Creates MCP client
+        ↓
+Embedded agent calls tools via MCP protocol
+        ↓
+MCP server executes tool handlers
+        ↓
+Results returned through MCP protocol
+        ↓
+Agent processes and returns final result
+```
+
+**Benefits**:
+- Full MCP protocol compliance throughout
+- Architectural consistency - all tool access via MCP
+- Zero performance overhead (no network, no serialization)
+- Proper tool isolation at protocol level
+- No recursion risk (use_sentry excluded from internal server)
+
+**Implementation**: Uses built-in `InMemoryTransport.createLinkedPair()` from `@modelcontextprotocol/sdk/inMemory.js` for reliable in-process communication.
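
The in-memory wiring can be illustrated without the SDK; this dependency-free sketch mirrors the shape of `InMemoryTransport.createLinkedPair()` but omits MCP message framing:

```typescript
// Simplified stand-in for the SDK's InMemoryTransport: two ends of a pipe,
// each delivering messages directly to its peer in-process.
type Handler = (message: string) => void;

class InMemoryPipe {
  private peer?: InMemoryPipe;
  onMessage: Handler = () => {};

  static createLinkedPair(): [InMemoryPipe, InMemoryPipe] {
    const a = new InMemoryPipe();
    const b = new InMemoryPipe();
    a.peer = b;
    b.peer = a;
    return [a, b];
  }

  send(message: string): void {
    // Delivered synchronously to the peer: no network, no serialization
    this.peer?.onMessage(message);
  }
}
```

In the real implementation, one end is handed to the internal MCP server and the other to the MCP client used by the embedded agent.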
+
+## Security Model
+
+- Access tokens never logged
+- OAuth tokens encrypted in KV
+- Per-organization isolation
+- CORS configured for security
+
+## Testing Architecture
+
+Three levels of testing:
+
+1. **Unit tests**: Fast, isolated, snapshot-based
+2. **Integration tests**: With mocked API
+3. **Evaluation tests**: Real-world scenarios with LLM
+
+## References
+
+- MCP SDK: `@modelcontextprotocol/sdk`
+- Build config: `turbo.json`
+- TypeScript config: `packages/mcp-server-tsconfig/`
+- API client: `packages/mcp-server/src/api-client/`
diff --git a/docs/cloudflare/architecture.md b/docs/cloudflare/architecture.md
new file mode 100644
index 000000000..0a367f6c7
--- /dev/null
+++ b/docs/cloudflare/architecture.md
@@ -0,0 +1,189 @@
+# Cloudflare Chat Agent Architecture
+
+Technical architecture for the web-based chat interface hosted on Cloudflare Workers.
+
+## Overview
+
+The Cloudflare chat agent provides a web interface for interacting with Sentry through an AI assistant. It's built as a full-stack application using:
+
+- **Frontend**: React with Tailwind CSS
+- **Backend**: Cloudflare Workers with Hono framework
+- **AI**: OpenAI GPT-4 via Vercel AI SDK
+- **MCP Integration**: HTTP transport to core MCP server
+
+## Package Structure
+
+```
+packages/mcp-cloudflare/
+├── src/
+│   ├── client/          # React frontend
+│   │   ├── components/  # UI components
+│   │   ├── contexts/    # React contexts
+│   │   ├── hooks/       # Custom React hooks
+│   │   └── utils/       # Client utilities
+│   └── server/          # Cloudflare Workers backend
+│       ├── lib/         # Server libraries
+│       ├── routes/      # API routes
+│       ├── types/       # TypeScript types
+│       └── utils/       # Server utilities
+├── public/              # Static assets
+└── wrangler.toml        # Cloudflare configuration
+```
+
+## Key Components
+
+### 1. OAuth Authentication
+
+Handles Sentry OAuth flow for user authentication:
+
+```typescript
+// server/routes/auth.ts
+export default new Hono()
+ .get("/login", handleOAuthLogin)
+ .get("/callback", handleOAuthCallback)
+ .post("/logout", handleLogout);
+```
+
+**Features:**
+- OAuth 2.0 flow with Sentry
+- Token storage in Cloudflare KV
+- Automatic token refresh
+- Per-organization access control
+
+### 2. Chat Interface
+
+React-based chat UI with real-time streaming:
+
+```typescript
+// client/components/chat/chat.tsx
+export function Chat() {
+ const { messages, handleSubmit } = useChat({
+ api: "/api/chat",
+ headers: { Authorization: `Bearer ${authToken}` }
+ });
+}
+```
+
+**Features:**
+- Message streaming with Vercel AI SDK
+- Tool call visualization
+- Slash commands (/help, /prompts, /clear)
+- Prompt parameter dialogs
+- Markdown rendering with syntax highlighting
+
+### 3. MCP Integration
+
+Connects to the core MCP server via HTTP transport:
+
+```typescript
+// server/routes/chat.ts
+const mcpClient = await experimental_createMCPClient({
+ name: "sentry",
+ transport: {
+ type: "sse",
+ url: sseUrl,
+ headers: { Authorization: `Bearer ${accessToken}` }
+ }
+});
+```
+
+**Features:**
+- Server-sent events (SSE) for MCP communication
+- Automatic tool discovery
+- Prompt metadata endpoint
+- Error handling with fallbacks
+
+### 4. AI Assistant
+
+GPT-4 integration with Sentry-specific system prompt:
+
+```typescript
+const result = streamText({
+ model: openai("gpt-4o"),
+ messages: processedMessages,
+ tools: mcpTools,
+ system: "You are an AI assistant for testing Sentry MCP..."
+});
+```
+
+**Features:**
+- Streaming responses
+- Tool execution
+- Prompt template processing
+- Context-aware assistance
+
+## Data Flow
+
+1. **User Authentication**:
+ ```
+   User → OAuth Login → Sentry → OAuth Callback → KV Storage
+ ```
+
+2. **Chat Message Flow**:
+ ```
+   User Input → Chat API → Process Prompts → AI Model → Stream Response
+                                                ↓
+                                    MCP Server ← Tool Calls
+ ```
+
+3. **MCP Communication**:
+ ```
+   Chat Server → SSE Transport → MCP Server → Sentry API
+ ```
+
+## Deployment Architecture
+
+### Cloudflare Resources
+
+- **Workers**: Serverless compute for API routes
+- **Pages**: Static asset hosting for React app
+- **KV Namespace**: OAuth token storage
+- **AI Binding**: Access to Cloudflare AI models (AutoRAG for docs search)
+- **R2**: File storage (future)
+
+### Environment Variables
+
+Required for deployment:
+
+```toml
+[vars]
+COOKIE_SECRET = "..." # For session encryption
+OPENAI_API_KEY = "..." # For GPT-4 access
+SENTRY_CLIENT_ID = "..." # OAuth app ID
+SENTRY_CLIENT_SECRET = "..." # OAuth app secret
+```
+
+### API Routes
+
+- `/api/auth/*` - Authentication endpoints
+- `/api/chat` - Main chat endpoint
+- `/api/metadata` - MCP metadata endpoint
+- `/sse` - Server-sent events for MCP
+
+## Security Considerations
+
+1. **Authentication**: OAuth tokens stored encrypted in KV
+2. **Authorization**: Per-organization access control
+3. **Rate Limiting**: Cloudflare rate limiter integration
+4. **CORS**: Restricted to same-origin requests
+5. **CSP**: Content Security Policy headers
+
+## Performance Optimizations
+
+1. **Edge Computing**: Runs at Cloudflare edge locations
+2. **Caching**: Metadata endpoint with cache headers
+3. **Streaming**: Server-sent events for real-time updates
+4. **Bundle Splitting**: Optimized React build
+
+## Monitoring
+
+- Sentry integration for error tracking
+- Cloudflare Analytics for usage metrics
+- Custom telemetry for MCP operations
+
+## Related Documentation
+
+- See "OAuth Architecture" in @docs/cloudflare/oauth-architecture.md
+- See "Chat Interface" in @docs/cloudflare/architecture.md
+- See "Deployment" in @docs/cloudflare/deployment.md
+- See "Architecture" in @docs/architecture.mdc
diff --git a/docs/cloudflare/oauth-architecture.md b/docs/cloudflare/oauth-architecture.md
new file mode 100644
index 000000000..8a8221af8
--- /dev/null
+++ b/docs/cloudflare/oauth-architecture.md
@@ -0,0 +1,404 @@
+# OAuth Architecture: MCP OAuth vs Sentry OAuth
+
+## Two Separate OAuth Systems
+
+The Sentry MCP implementation involves **two completely separate OAuth providers**:
+
+### 1. MCP OAuth Provider (Our Server)
+- **What it is**: Our own OAuth 2.0 server built with `@cloudflare/workers-oauth-provider`
+- **Purpose**: Authenticates MCP clients (like Cursor, VS Code, etc.)
+- **Tokens issued**: MCP access tokens and MCP refresh tokens
+- **Storage**: Uses Cloudflare KV to store encrypted tokens
+- **Endpoints**: `/oauth/register`, `/oauth/authorize`, `/oauth/token`
+
+### 2. Sentry OAuth Provider (Sentry's Server)
+- **What it is**: Sentry's official OAuth 2.0 server at `sentry.io`
+- **Purpose**: Authenticates users and grants API access to Sentry
+- **Tokens issued**: Sentry access tokens and Sentry refresh tokens
+- **Storage**: Tokens are stored encrypted within MCP's token props
+- **Endpoints**: `https://sentry.io/oauth/authorize/`, `https://sentry.io/oauth/token/`
+
+## High-Level Flow
+
+The system uses a dual-token approach:
+1. **MCP clients** authenticate with **MCP OAuth** to get MCP tokens
+2. **MCP OAuth** authenticates with **Sentry OAuth** to get Sentry tokens
+3. **MCP tokens** contain encrypted **Sentry tokens** in their payload
+4. When serving MCP requests, the server uses Sentry tokens to call Sentry's API
+
+### Complete Flow Diagram
+
+```mermaid
+sequenceDiagram
+ participant Client as MCP Client (Cursor)
+ participant MCPOAuth as MCP OAuth Provider (Our Server)
+ participant MCP as MCP Server (Stateless Handler)
+ participant SentryOAuth as Sentry OAuth Provider (sentry.io)
+ participant SentryAPI as Sentry API
+ participant User as User
+
+ Note over Client,SentryAPI: Initial Client Registration
+ Client->>MCPOAuth: Register as OAuth client
+ MCPOAuth-->>Client: MCP Client ID & Secret
+
+ Note over Client,SentryAPI: User Authorization Flow
+ Client->>MCPOAuth: Request authorization
+ MCPOAuth->>User: Show MCP consent screen
+ User->>MCPOAuth: Approve MCP permissions
+ MCPOAuth->>SentryOAuth: Redirect to Sentry OAuth
+ SentryOAuth->>User: Sentry login page
+ User->>SentryOAuth: Authenticate with Sentry
+ SentryOAuth-->>MCPOAuth: Sentry auth code
+ MCPOAuth->>SentryOAuth: Exchange code for tokens
+ SentryOAuth-->>MCPOAuth: Sentry access + refresh tokens
+ MCPOAuth-->>Client: MCP access token (contains encrypted Sentry tokens)
+
+ Note over Client,SentryAPI: Using MCP Protocol
+ Client->>MCP: MCP request with MCP Bearer token
+ MCP->>MCPOAuth: Validate MCP token
+ MCPOAuth-->>MCP: Decrypted props (includes Sentry tokens)
+ MCP->>SentryAPI: API call with Sentry Bearer token
+ SentryAPI-->>MCP: API response
+ MCP-->>Client: MCP response
+
+ Note over Client,SentryAPI: Token Refresh
+ Client->>MCPOAuth: POST /oauth/token (MCP refresh_token)
+ MCPOAuth->>MCPOAuth: Check Sentry token expiry
+ alt Sentry token still valid
+ MCPOAuth-->>Client: New MCP token (reusing cached Sentry token)
+ else Sentry token expired
+ MCPOAuth->>SentryOAuth: Refresh Sentry token
+ SentryOAuth-->>MCPOAuth: New Sentry tokens
+ MCPOAuth-->>Client: New MCP token (with new Sentry tokens)
+ end
+```
+
+## Key Concepts
+
+### Token Types
+
+| Token Type | Issued By | Used By | Contains | Purpose |
+|------------|-----------|---------|----------|----------|
+| **MCP Access Token** | MCP OAuth Provider | MCP Clients | Encrypted Sentry tokens | Authenticate to MCP Server |
+| **MCP Refresh Token** | MCP OAuth Provider | MCP Clients | Grant reference | Refresh MCP access tokens |
+| **Sentry Access Token** | Sentry OAuth | MCP Server | User credentials | Call Sentry API |
+| **Sentry Refresh Token** | Sentry OAuth | MCP OAuth Provider | Refresh credentials | Refresh Sentry tokens |
+
+### Not a Simple Proxy
+
+**Important**: MCP is NOT an HTTP proxy that forwards requests. Instead:
+- MCP implements the **Model Context Protocol** (tools, prompts, resources)
+- Clients send MCP protocol messages, not HTTP requests
+- MCP Server executes these commands using Sentry's API
+- Responses are MCP protocol messages, not raw HTTP responses
+
+## Technical Implementation
+
+### MCP OAuth Provider Details
+
+The MCP OAuth Provider is built with `@cloudflare/workers-oauth-provider` and provides:
+
+1. **Dynamic client registration** - MCP clients can register on-demand
+2. **PKCE support** - Secure authorization code flow
+3. **Token management** - Issues and validates MCP tokens
+4. **Consent UI** - Custom approval screen for permissions
+5. **Token encryption** - Stores Sentry tokens encrypted in MCP token props
+
+### Sentry OAuth Integration
+
+The integration with Sentry OAuth happens through:
+
+1. **Authorization redirect** - After MCP consent, redirect to Sentry OAuth
+2. **Code exchange** - Exchange Sentry auth code for tokens
+3. **Token storage** - Store Sentry tokens in MCP token props
+4. **Token refresh** - Use Sentry refresh tokens to get new access tokens
+
+## MCP OAuth Provider Flow
+
+### How the MCP OAuth Provider Works
+
+```mermaid
+sequenceDiagram
+ participant Agent as AI Agent
+ participant MCPOAuth as MCP OAuth Provider
+ participant KV as Cloudflare KV
+ participant User as User
+ participant MCP as MCP Server
+
+ Agent->>MCPOAuth: Register as client
+ MCPOAuth->>KV: Store client registration
+ MCPOAuth-->>Agent: MCP Client ID & Secret
+
+ Agent->>MCPOAuth: Request authorization
+ MCPOAuth->>User: Show MCP consent screen
+ User->>MCPOAuth: Approve
+ MCPOAuth->>KV: Store grant
+ MCPOAuth-->>Agent: Authorization code
+
+ Agent->>MCPOAuth: Exchange code for MCP token
+ MCPOAuth->>KV: Validate grant
+ MCPOAuth->>KV: Store encrypted MCP token
+ MCPOAuth-->>Agent: MCP access token
+
+ Agent->>MCP: MCP protocol request with MCP token
+ MCP->>MCPOAuth: Validate MCP token
+ MCPOAuth->>KV: Lookup MCP token
+ MCPOAuth-->>MCP: Decrypted props (includes Sentry tokens)
+ MCP-->>Agent: MCP protocol response
+```
+
+## Implementation Details
+
+### 1. MCP OAuth Provider Configuration
+
+The MCP OAuth Provider is configured in `src/server/index.ts`:
+
+```typescript
+const oAuthProvider = new OAuthProvider({
+ apiHandlers: {
+ "/sse": createMcpHandler("/sse", true),
+ "/mcp": createMcpHandler("/mcp", false),
+ },
+ defaultHandler: app, // Hono app for non-OAuth routes
+ authorizeEndpoint: "/oauth/authorize",
+ tokenEndpoint: "/oauth/token",
+ clientRegistrationEndpoint: "/oauth/register",
+ scopesSupported: Object.keys(SCOPES),
+});
+```
+
+### 2. API Handler
+
+The `apiHandler` is a protected endpoint that requires valid OAuth tokens:
+
+- `/mcp` - MCP protocol endpoint (HTTP transport)
+
+The handler receives:
+- `request`: The incoming request
+- `env`: Cloudflare environment bindings
+- `ctx`: Execution context with `ctx.props` containing decrypted user data
+
+### 3. Token Structure
+
+MCP tokens contain encrypted properties including Sentry tokens:
+
+```typescript
+interface WorkerProps {
+ id: string; // Sentry user ID
+ name: string; // User name
+ accessToken: string; // Sentry access token
+ refreshToken?: string; // Sentry refresh token
+ accessTokenExpiresAt?: number; // Sentry token expiry timestamp
+ scope: string; // MCP permissions granted
+ grantedScopes?: string[]; // Sentry API scopes
+}
+```
+
+### 4. URL Constraints Challenge
+
+#### The Problem
+
+The MCP server needs to support URL-based constraints like `/mcp/sentry/javascript` to limit agent access to specific organizations/projects. However:
+
+1. OAuth Provider only does prefix matching (`/mcp` matches `/mcp/*`)
+2. The MCP handler needs to extract constraints from URL paths
+3. URL path parameters must be preserved through the OAuth middleware
+
+#### The Solution
+
+The MCP handler parses URL path segments to extract organization and project constraints:
+
+**Example URLs:**
+- `/mcp` - No constraints (full access within granted scopes)
+- `/mcp/sentry` - Organization constraint (limited to "sentry" org)
+- `/mcp/sentry/javascript` - Organization + project constraints
+
+The handler extracts these constraints, combines them with authentication data from the OAuth provider (via ExecutionContext), and builds the complete ServerContext. This context determines which resources tools can access.
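
A hedged sketch of the extraction step; the function name and validation details are assumptions, not the actual Cloudflare handler:

```typescript
// Hypothetical parser for /mcp, /mcp/<org>, /mcp/<org>/<project> paths.
interface Constraints {
  organizationSlug?: string;
  projectSlug?: string;
}

function parseConstraints(pathname: string, prefix = "/mcp"): Constraints {
  // Strip the endpoint prefix, then read up to two path segments
  const segments = pathname.slice(prefix.length).split("/").filter(Boolean);
  const [organizationSlug, projectSlug] = segments;
  return { organizationSlug, projectSlug };
}
```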
+
+## Storage (KV Namespace)
+
+The MCP OAuth Provider uses `OAUTH_KV` namespace to store:
+
+1. **MCP Client registrations**: `client:{clientId}` - MCP OAuth client details
+2. **MCP Authorization grants**: `grant:{userId}:{grantId}` - User consent records for MCP
+3. **MCP Access tokens**: `token:{userId}:{grantId}:{tokenId}` - Encrypted MCP tokens (contains Sentry tokens)
+4. **MCP Refresh tokens**: `refresh:{userId}:{grantId}:{refreshId}` - For MCP token renewal
+
+### Token Storage Structure
+
+When a user completes the full OAuth flow, the MCP OAuth Provider stores Sentry tokens inside MCP token props:
+
+```typescript
+// In /oauth/callback after exchanging code with Sentry
+const { redirectTo } = await c.env.OAUTH_PROVIDER.completeAuthorization({
+ // ... other params
+ props: {
+ id: payload.user.id, // From Sentry
+ name: payload.user.name, // From Sentry
+ accessToken: payload.access_token, // Sentry's access token
+ refreshToken: payload.refresh_token, // Sentry's refresh token
+ accessTokenExpiresAt: Date.now() + payload.expires_in * 1000,
+ scope: oauthReqInfo.scope.join(" "), // MCP scopes
+ grantedScopes: Array.from(grantedScopes), // Sentry API scopes
+ // ... other fields
+ }
+});
+```
+
+## Token Refresh Implementation
+
+### Dual Refresh Token System
+
+The system maintains two separate refresh flows:
+
+1. **MCP Token Refresh**: When MCP clients need new MCP access tokens
+2. **Sentry Token Refresh**: When Sentry access tokens expire (handled internally)
+
+### MCP Token Refresh Flow
+
+When an MCP client's token expires:
+
+1. Client sends refresh request to MCP OAuth: `POST /oauth/token` with MCP refresh token
+2. MCP OAuth invokes `tokenExchangeCallback` function
+3. Callback checks if cached Sentry token is still valid (with 2-minute safety window)
+4. If Sentry token is valid, returns new MCP token with cached Sentry token
+5. If Sentry token expired, refreshes with Sentry OAuth and updates storage
+
+### Token Exchange Callback Implementation
+
+```typescript
+// tokenExchangeCallback in src/server/oauth/helpers.ts
+export async function tokenExchangeCallback(options, env) {
+ // Only handle MCP refresh_token requests
+ if (options.grantType !== "refresh_token") {
+ return undefined;
+ }
+
+ // Extract Sentry refresh token from MCP token props
+ const sentryRefreshToken = options.props.refreshToken;
+ if (!sentryRefreshToken) {
+ throw new Error("No Sentry refresh token available in stored props");
+ }
+
+ // Smart caching: Check if Sentry token is still valid
+  const sentryTokenExpiresAt = options.props.accessTokenExpiresAt;
+ if (sentryTokenExpiresAt && Number.isFinite(sentryTokenExpiresAt)) {
+ const remainingMs = sentryTokenExpiresAt - Date.now();
+ const SAFE_WINDOW_MS = 2 * 60 * 1000; // 2 minutes safety
+
+ if (remainingMs > SAFE_WINDOW_MS) {
+ // Sentry token still valid - return new MCP token with cached Sentry token
+ return {
+ newProps: { ...options.props },
+ accessTokenTTL: Math.floor(remainingMs / 1000),
+ };
+ }
+ }
+
+ // Sentry token expired - refresh with Sentry OAuth
+  const [sentryTokens, errorResponse] = await refreshAccessToken({
+    client_id: env.SENTRY_CLIENT_ID,
+    client_secret: env.SENTRY_CLIENT_SECRET,
+    refresh_token: sentryRefreshToken,
+    upstream_url: "https://sentry.io/oauth/token/",
+  });
+  if (errorResponse) {
+    // Propagate the Sentry OAuth failure instead of continuing with stale tokens
+    return errorResponse;
+  }
+
+ // Update MCP token props with new Sentry tokens
+ return {
+ newProps: {
+ ...options.props,
+ accessToken: sentryTokens.access_token, // New Sentry access token
+ refreshToken: sentryTokens.refresh_token, // New Sentry refresh token
+ accessTokenExpiresAt: Date.now() + sentryTokens.expires_in * 1000,
+ },
+ accessTokenTTL: sentryTokens.expires_in,
+ };
+}
+```
+
+### Error Scenarios
+
+1. **Missing Sentry Refresh Token**:
+ - Error: "No Sentry refresh token available in stored props"
+ - Resolution: Client must re-authenticate through full OAuth flow
+
+2. **Sentry Refresh Token Invalid**:
+ - Error: Sentry OAuth returns 401/400
+ - Resolution: Client must re-authenticate with both MCP and Sentry
+
+3. **Network Failures**:
+ - Error: Cannot reach Sentry OAuth endpoint
+ - Resolution: Retry with exponential backoff or re-authenticate
+
+The 2-minute safety window prevents edge cases with clock skew and processing delays between MCP and Sentry.
+
+## Security Features
+
+1. **PKCE**: MCP OAuth uses PKCE to prevent authorization code interception
+2. **Token encryption**: Sentry tokens encrypted within MCP tokens using WebCrypto
+3. **Dual consent**: Users approve both MCP permissions and Sentry access
+4. **Scope enforcement**: Both MCP and Sentry scopes limit access
+5. **Token expiration**: Both MCP and Sentry tokens have expiry times
+6. **Refresh token rotation**: Sentry issues new refresh tokens on each refresh
+
+## Discovery Endpoints
+
+The MCP OAuth Provider automatically provides:
+
+- `/.well-known/oauth-authorization-server` - MCP OAuth server metadata
+- `/.well-known/oauth-protected-resource` - MCP resource server info
+
+Note: These describe the MCP OAuth server, not Sentry's OAuth endpoints.
+
+## Integration Between MCP OAuth and MCP Server
+
+The MCP Server (stateless handler) receives context via closure capture:
+
+1. **Props via ExecutionContext**: Decrypted data from MCP token (includes Sentry tokens)
+2. **Constraints from URL**: Organization/project limits parsed from URL path
+3. **Context capture**: Server built with context, captured in tool handler closures
+
+The MCP Server then uses the Sentry access token from context to make Sentry API calls.
+
+## Limitations
+
+1. **No direct Hono integration**: OAuth Provider expects specific handler signatures
+2. **Constraint extraction**: Must parse URL segments to extract organization/project constraints
+
+## Why Use Two OAuth Systems?
+
+### Benefits of the Dual OAuth Approach
+
+1. **Security isolation**: MCP clients never see Sentry tokens directly
+2. **Token management**: MCP can refresh Sentry tokens transparently
+3. **Permission layering**: MCP permissions separate from Sentry API scopes
+4. **Client flexibility**: MCP clients don't need to understand Sentry OAuth
+
+### Why Not Direct Sentry OAuth?
+
+If MCP clients used Sentry OAuth directly:
+- Clients would need to manage Sentry token refresh
+- No way to add MCP-specific permissions
+- Clients would have raw Sentry API access (security risk)
+- No centralized token management
+
+### Implementation Complexity
+
+The MCP OAuth Provider (via `@cloudflare/workers-oauth-provider`) provides:
+- OAuth 2.0 authorization flows
+- Dynamic client registration
+- Token issuance and validation
+- PKCE support
+- Consent UI
+- Token encryption
+- KV storage
+- Discovery endpoints
+
+Reimplementing this would be complex and error-prone.
+
+## Related Documentation
+
+- [Cloudflare OAuth Provider](https://github.com/cloudflare/workers-oauth-provider)
+- [OAuth 2.0 Specification](https://oauth.net/2/)
+- [Dynamic Client Registration](https://www.rfc-editor.org/rfc/rfc7591.html)
+- [PKCE](https://www.rfc-editor.org/rfc/rfc7636)
\ No newline at end of file
diff --git a/docs/cloudflare/overview.md b/docs/cloudflare/overview.md
new file mode 100644
index 000000000..3609aaf7a
--- /dev/null
+++ b/docs/cloudflare/overview.md
@@ -0,0 +1,47 @@
+# Cloudflare Web Chat Application
+
+This directory contains documentation for the Cloudflare-hosted web chat application that **uses** the Sentry MCP server.
+
+## Important: This is NOT part of MCP
+
+The Cloudflare chat application (`packages/mcp-cloudflare`) is a **separate web application** that demonstrates how to build a chat interface using MCP. It is not part of the MCP protocol or server itself.
+
+Think of it as:
+- **MCP Server**: The backend service that provides Sentry functionality via the Model Context Protocol
+- **Cloudflare Chat**: A frontend web app (like ChatGPT) that connects to and uses the MCP server
+
+## What This Application Provides
+
+- Web-based chat UI with OAuth authentication
+- AI-powered assistant using OpenAI's GPT-4
+- Integration with Sentry MCP tools via HTTP transport
+- Cloudflare Workers deployment for global edge hosting
+
+## Architecture Separation
+
+```
+┌─────────────────────────┐      ┌──────────────────────┐
+│   Cloudflare Web App    │      │      MCP Server      │
+│  (This Documentation)   │      │   (Core MCP Docs)    │
+├─────────────────────────┤      ├──────────────────────┤
+│ • React Frontend        │      │ • MCP Protocol       │
+│ • Chat UI               │ ───> │ • Sentry Tools       │
+│ • OAuth Flow            │      │                      │
+│ • GPT-4 Integration     │      │                      │
+└─────────────────────────┘      └──────────────────────┘
+      Uses MCP via                  The actual MCP
+     HTTP Transport                 implementation
+```
+
+## Documentation Structure
+
+- Architecture: @docs/cloudflare/architecture.md - Technical architecture of the web application
+- OAuth Architecture: @docs/cloudflare/oauth-architecture.md - OAuth flow and token management
+- Chat Interface: @docs/cloudflare/architecture.md - See "Chat Interface" section
+- Deployment: @docs/cloudflare/deployment.md - Deploying to Cloudflare Workers
+
+## Quick Links
+
+- Live deployment: https://mcp.sentry.dev
+- Package location: `packages/mcp-cloudflare`
+- **For MCP Server docs**: See "Architecture" in @docs/architecture.mdc
diff --git a/docs/coding-guidelines.mdc b/docs/coding-guidelines.mdc
new file mode 100644
index 000000000..139d736f4
--- /dev/null
+++ b/docs/coding-guidelines.mdc
@@ -0,0 +1,144 @@
+# Coding Guidelines
+
+Essential patterns and standards for Sentry MCP development.
+
+## TypeScript Configuration
+
+```typescript
+// tsconfig.json essentials
+{
+ "compilerOptions": {
+ "strict": true,
+ "target": "ES2022",
+ "module": "ESNext",
+ "moduleResolution": "Bundler",
+ "sourceMap": true,
+ "noImplicitAny": true
+ }
+}
+```
+
+## Code Style
+
+### Biome Configuration
+- 2 spaces, double quotes, semicolons
+- Max line: 100 chars
+- Trailing commas in multiline
+
+### Naming Conventions
+- Files: `kebab-case.ts`
+- Functions: `camelCase`
+- Types/Classes: `PascalCase`
+- Constants: `UPPER_SNAKE_CASE`
+
+### Import Order
+```typescript
+// 1. Node built-ins
+import { readFile } from "node:fs/promises";
+// 2. External deps
+import { z } from "zod";
+// 3. Internal packages
+import { mockData } from "@sentry-mcp/mocks";
+// 4. Relative imports
+import { UserInputError } from "./errors.js";
+```
+
+## Tool Implementation
+
+```typescript
+export const toolName = {
+ description: "Clear, concise description",
+ parameters: z.object({
+ required: z.string().describe("Description"),
+    optional: z.string().optional(),
+ }),
+ execute: async (params, context) => {
+ // 1. Validate inputs
+ // 2. Call API
+ // 3. Format output
+ return formatResponse(data);
+ }
+};
+```
+
+## Testing Standards
+
+```typescript
+describe("Component", () => {
+ it("handles normal case", async () => {
+ // Arrange
+ const input = createTestInput();
+
+ // Act
+ const result = await method(input);
+
+ // Assert
+ expect(result).toMatchInlineSnapshot();
+ });
+});
+```
+
+Key practices:
+- Use inline snapshots for formatting
+- Mock with MSW
+- Test success and error paths
+- Keep tests isolated
+
+## Quality Checklist
+
+Before committing:
+```bash
+pnpm -w run lint # Biome check
+pnpm -w run lint:fix # Fix issues
+pnpm tsc --noEmit # Type check
+pnpm test # Run tests
+pnpm -w run build # Build all
+```
+
+## JSDoc Pattern
+
+```typescript
+/**
+ * Brief description.
+ *
+ * @param param - Description
+ * @returns What it returns
+ *
+ * @example
+ * ```typescript
+ * const result = func(param);
+ * ```
+ */
+```
+
+## Security Essentials
+
+- Never commit secrets
+- Validate all inputs
+- Use environment variables
+- Sanitize displayed data
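+
+A small helper in the spirit of these rules (hypothetical; the codebase may handle configuration differently):
+
+```typescript
+// Read a required secret from the environment instead of hardcoding it.
+// Failing fast at startup beats a confusing runtime error later.
+function requireEnv(name: string): string {
+  const value = process.env[name];
+  if (!value) {
+    throw new Error(`Missing required environment variable: ${name}`);
+  }
+  return value;
+}
+```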
+
+## Common Patterns
+
+For shared patterns see:
+- Error handling: `common-patterns.mdc#error-handling`
+- Zod schemas: `common-patterns.mdc#zod-schema-patterns`
+- API usage: `api-patterns.mdc`
+- Testing: `testing.mdc`
+
+## Monorepo Commands
+
+```bash
+# Workspace-wide (from root)
+pnpm -w run lint
+
+# Package-specific (from package dir)
+pnpm test
+```
+
+## References
+
+- Architecture: `architecture.mdc`
+- Testing guide: `testing.mdc`
+- API patterns: `api-patterns.mdc`
+- Common patterns: `common-patterns.mdc`
\ No newline at end of file
diff --git a/docs/common-patterns.mdc b/docs/common-patterns.mdc
new file mode 100644
index 000000000..ea7a1eb84
--- /dev/null
+++ b/docs/common-patterns.mdc
@@ -0,0 +1,279 @@
+# Common Patterns
+
+Reusable patterns used throughout the Sentry MCP codebase. Reference these instead of duplicating.
+
+## Error Handling
+
+### UserInputError Pattern
+
+For invalid user input that needs clear feedback:
+
+```typescript
+if (!params.organizationSlug) {
+ throw new UserInputError(
+ "Organization slug is required. Please provide an organizationSlug parameter. " +
+ "You can find available organizations using the `find_organizations()` tool."
+ );
+}
+```
+
+See implementation: `packages/mcp-server/src/errors.ts`
+
+### API Error Wrapping
+
+When external API calls fail:
+
+```typescript
+try {
+ const data = await apiService.issues.list(params);
+ return data;
+} catch (error) {
+  // Under strict TS the caught value is `unknown`, so narrow before use
+  const message = error instanceof Error ? error.message : String(error);
+  throw new Error(`Failed to fetch issues: ${message}`);
+}
+```
+
+### Error Message Transformation
+
+Make error messages LLM-friendly:
+
+```typescript
+if (message.includes("You do not have the multi project stream feature enabled")) {
+ return "You do not have access to query across multiple projects. Please select a project for your query.";
+}
+```
+
+## Zod Schema Patterns
+
+### Reusable Parameter Schemas
+
+Define once, use everywhere:
+
+```typescript
+export const ParamOrganizationSlug = z
+ .string()
+ .trim()
+ .describe("The organization's slug. You can find a list using the `find_organizations()` tool.");
+
+export const ParamRegionUrl = z
+ .string()
+ .url()
+ .optional()
+ .describe("Sentry region URL. If not provided, uses default region.");
+```
+
+See: `packages/mcp-server/src/schema.ts`
+
+### Flexible Schema Patterns
+
+```typescript
+// Support multiple ID formats
+z.union([z.string(), z.number()])
+
+// Optional with transforms
+z.string().optional().transform(val => val?.trim())
+
+// Partial objects with passthrough
+IssueSchema.partial().passthrough()
+```
+
+### Type Derivation
+
+```typescript
+export type Organization = z.infer<typeof OrganizationSchema>;
+export type ToolParams = z.infer<typeof ToolParamsSchema>;
+```
+
+## Testing Patterns
+
+For comprehensive testing guidance, see `testing.mdc` and `adding-tools.mdc#step-3-add-tests`.
+
+### Unit Test Structure
+
+```typescript
+describe("tool_name", () => {
+ it("returns formatted output", async () => {
+ const result = await TOOL_HANDLERS.tool_name(mockContext, {
+ organizationSlug: "test-org",
+ });
+
+ expect(result).toMatchInlineSnapshot(`
+ "# Results in **test-org**
+
+ Expected formatted output here"
+ `);
+ });
+});
+```
+
+### Snapshot Updates
+
+When tool output changes:
+
+```bash
+cd packages/mcp-server
+pnpm vitest --run -u
+```
+
+### Mock Server Setup
+
+```typescript
+beforeAll(() => mswServer.listen());
+afterEach(() => mswServer.resetHandlers());
+afterAll(() => mswServer.close());
+```
+
+See: `packages/mcp-server/src/test-utils/setup.ts`
+
+## API Patterns
+
+For complete API usage patterns, see `api-patterns.mdc`.
+
+### Service Creation
+
+```typescript
+const apiService = apiServiceFromContext(context, {
+ regionUrl: params.regionUrl,
+});
+```
+
+See: `packages/mcp-server/src/api-utils.ts:apiServiceFromContext`
+
+### Multi-Region Support
+
+```typescript
+if (opts.regionUrl) {
+ try {
+ host = new URL(opts.regionUrl).host;
+ } catch (error) {
+ throw new UserInputError(
+ `Invalid regionUrl provided: ${opts.regionUrl}. Must be a valid URL.`
+ );
+ }
+}
+```
+
+## Response Formatting
+
+### Markdown Structure
+
+```typescript
+let output = `# ${title}\n\n`;
+
+// Handle empty results
+if (data.length === 0) {
+ output += "No results found.\n";
+ return output;
+}
+
+// Add data sections
+output += "## Section\n";
+output += formatData(data);
+
+// Add usage instructions
+output += "\n\n# Using this information\n\n";
+output += "- Next steps...\n";
+```
+
+### Multi-Content Resources
+
+```typescript
+return {
+ contents: [
+ {
+ uri: url.toString(),
+ mimeType: "application/json",
+ text: JSON.stringify(data, null, 2)
+ }
+ ]
+};
+```
+
+## Parameter Validation
+
+### Required Parameters
+
+```typescript
+if (!params.requiredParam) {
+ throw new UserInputError(
+ "Required parameter is missing. Please provide requiredParam."
+ );
+}
+```
+
+### Multiple Options
+
+```typescript
+if (params.issueUrl) {
+ // Extract from URL
+} else if (params.organizationSlug && params.issueId) {
+ // Use direct parameters
+} else {
+ throw new UserInputError(
+ "Either issueUrl or both organizationSlug and issueId must be provided"
+ );
+}
+```
+
+## Mock Patterns
+
+### Basic Handler
+
+```typescript
+{
+ method: "get",
+ path: "/api/0/organizations/:orgSlug/issues/",
+ fetch: ({ params }) => {
+ return HttpResponse.json(issueListFixture);
+ },
+}
+```
+
+### Request Validation
+
+```typescript
+fetch: ({ request, params }) => {
+ const url = new URL(request.url);
+ const sort = url.searchParams.get("sort");
+
+ if (sort && !["date", "freq", "new"].includes(sort)) {
+ return HttpResponse.json("Invalid sort parameter", { status: 400 });
+ }
+
+ return HttpResponse.json(data);
+}
+```
+
+See: `packages/mcp-server-mocks/src/handlers/`
+
+## Quality Checks
+
+Required before any commit:
+
+```bash
+pnpm -w run lint:fix # Fix linting issues
+pnpm tsc --noEmit # TypeScript type checking
+pnpm test # Run all tests
+```
+
+## TypeScript Helpers
+
+### Generic Type Utilities
+
+```typescript
+// Extract Zod schema types from records
+type ZodifyRecord<T extends Record<string, z.ZodType>> = {
+  [K in keyof T]: z.infer<T[K]>;
+};
+
+// Const assertions for literal types
+export const TOOL_NAMES = ["tool1", "tool2"] as const;
+export type ToolName = typeof TOOL_NAMES[number];
+```
+
+## References
+
+- Error handling: `packages/mcp-server/src/errors.ts`
+- Schema definitions: `packages/mcp-server/src/schema.ts`
+- API utilities: `packages/mcp-server/src/api-utils.ts`
+- Test setup: `packages/mcp-server/src/test-utils/`
+- Mock handlers: `packages/mcp-server-mocks/src/handlers/`
\ No newline at end of file
diff --git a/docs/cursor.mdc b/docs/cursor.mdc
new file mode 100644
index 000000000..1e76956fd
--- /dev/null
+++ b/docs/cursor.mdc
@@ -0,0 +1,215 @@
+---
+description: Cursor IDE instructions for working with Sentry MCP codebase
+globs:
+alwaysApply: true
+---
+# Cursor IDE Instructions for Sentry MCP
+
+This file provides instructions for Cursor IDE when working with the Sentry MCP codebase.
+
+## Project Overview
+
+Sentry MCP is a Model Context Protocol server that provides access to Sentry's functionality through tools.
+
+## π΄ CRITICAL: Pre-Development Requirements
+
+**MANDATORY READING before code changes:**
+
+### Tool Development
+- MUST read `docs/adding-tools.mdc` before creating/modifying any tool
+- MUST read `docs/testing.mdc` for testing requirements
+- MUST verify tool count limits (target ~20, max 25)
+
+### Code Changes
+- MUST read `docs/common-patterns.mdc` for established patterns
+- MUST read `docs/api-patterns.mdc` for API usage
+
+## Documentation Maintenance Requirements
+
+**MANDATORY: Documentation MUST be updated when making code changes**
+
+### When Documentation MUST Be Updated
+- **Adding new tools**: Update `docs/adding-tools.mdc` if new patterns emerge
+- **Changing testing approaches**: Update `docs/testing.mdc` immediately
+- **Modifying API patterns**: Update `docs/api-patterns.mdc` with new patterns
+- **Adding common patterns**: Update `docs/common-patterns.mdc` immediately
+- **Changing architecture**: Update `docs/architecture.mdc`
+
+### Critical Sync Requirements
+- **AGENTS.md ↔ cursor.mdc**: These files MUST stay synchronized
+- **When updating AGENTS.md**: Also update `cursor.mdc` with equivalent guidance
+- **When updating cursor.mdc**: Also update `AGENTS.md` with equivalent guidance
+- Both files serve the same purpose for different tools (Claude Code vs Cursor IDE)
+
+### Documentation Update Process
+1. **Identify affected docs** while implementing changes
+2. **Update documentation in the same session** as code changes
+3. **Verify cross-references** remain accurate
+4. **Ensure AGENTS.md ↔ cursor.mdc sync** is maintained
+5. **Add examples** for new patterns introduced
+
+**Documentation updates are not optional - they are part of completing any task.**
+
+## Documentation
+
+All documentation is in the `docs/` directory:
+
+### Core References
+- `architecture.mdc` - System design and package structure
+- `common-patterns.mdc` - Reusable code patterns
+- `quality-checks.mdc` - Required quality checks
+
+### Implementation Guides
+- `adding-tools.mdc` - Adding new MCP tools
+
+### Technical References
+- `api-patterns.mdc` - Sentry API client usage
+- `testing.mdc` - Testing strategies
+- `releases/cloudflare.mdc` - Cloudflare Workers release
+- `releases/stdio.mdc` - npm package release
+- `monitoring.mdc` - Observability patterns
+- `security.mdc` - Authentication and security
+
+You should ALWAYS update docs when they are inaccurate, or when you have learned new relevant information that adds otherwise-missing clarity.
+
+## Documentation Maintenance
+
+- **Keep AGENTS.md and cursor.mdc concise**: These files are navigation aids, not comprehensive docs
+- **Reference, don't duplicate**: Point to `docs/` files instead of repeating content
+- **Update referenced docs first**: When making changes, update the actual documentation before updating references
+- **Avoid redundancy**: Check existing docs before creating new ones (see `docs/llms/documentation-style-guide.mdc`)
+
+## Tool Count Limits
+
+**IMPORTANT**: AI agents have a hard cap of 45 total tools. Sentry MCP must:
+- Target ~20 tools (current best practice)
+- Never exceed 25 tools (absolute maximum)
+- This limit exists in Cursor and possibly other tools
+
+## Code Validation Requirements
+
+**MANDATORY after ANY code changes:**
+- Run `pnpm run tsc` to verify type safety
+- Run `pnpm run lint` to check code style
+- Run `pnpm run test` for affected components
+- See `docs/quality-checks.mdc` for complete checklist
+
+**Commands to run:**
+
+```bash
+pnpm -w run lint:fix # Fix linting issues
+pnpm tsc --noEmit # Check TypeScript types
+pnpm test # Run all tests
+```
+
+**DO NOT proceed if any check fails.**
+
+## Tool Testing Requirements
+
+**ALL tools MUST have comprehensive tests that verify:**
+
+- **Input validation** - Required/optional parameters, type checking, edge cases
+- **Output formatting** - Markdown structure, content accuracy, error messages
+- **API integration** - Mock server responses, error handling, parameter passing
+- **Snapshot testing** - Use inline snapshots to verify formatted output
+
+**Required test patterns:**
+- Unit tests in individual `{tool-name}.test.ts` files using Vitest and MSW mocks
+- Input/output validation with inline snapshots
+- Error case testing (API failures, invalid params)
+- Mock server setup in `packages/mcp-server-mocks`
+
+See `docs/testing.mdc` for detailed testing patterns and `docs/adding-tools.mdc` for the testing workflow.
+
+## Essential Commands
+
+```bash
+# Development
+pnpm dev # Start all dev servers
+pnpm build # Build all packages
+pnpm inspector # Test tools interactively
+
+# Testing
+pnpm test # Unit tests
+pnpm eval # Evaluation tests (needs OPENAI_API_KEY)
+
+# Deployment
+pnpm deploy # Deploy to Cloudflare
+```
+
+## Quick Start
+
+1. Install dependencies: `pnpm install`
+2. For local testing: `pnpm start:stdio --access-token=<token>`
+3. For development: `pnpm dev`
+
+## Cursor-Specific Notes
+
+When using Cursor's MCP integration:
+- The server runs via stdio transport
+- Authentication uses access tokens (not OAuth)
+- Follow the patterns in existing code
+
+## Environment Variables
+
+See specific guides for required environment variables:
+- Cloudflare: `releases/cloudflare.mdc`
+- stdio: `releases/stdio.mdc`
+- Evaluation tests: `.env.example`
+- Local development: Use command-line args
+
+## Repository Structure
+
+```
+sentry-mcp/
+├── packages/
+│   ├── mcp-server/           # Main MCP server (tools)
+│   │   ├── src/
+│   │   │   ├── tools/        # 19 individual tool modules + utils
+│   │   │   ├── server.ts     # MCP server configuration
+│   │   │   ├── api-client/   # Sentry API client
+│   │   │   └── internal/     # Shared utilities
+│   │   └── scripts/          # Build scripts (tool definitions generation)
+│   ├── mcp-cloudflare/       # Cloudflare Worker chat application
+│   │   ├── src/
+│   │   │   ├── client/       # React frontend
+│   │   │   └── server/       # Worker API routes
+│   │   └── components.json   # Shadcn/ui config
+│   ├── mcp-server-evals/     # AI evaluation tests
+│   ├── mcp-server-mocks/     # MSW mocks for testing
+│   ├── mcp-server-tsconfig/  # Shared TypeScript configs
+│   └── mcp-test-client/      # MCP client for testing
+└── docs/                     # All documentation
+    ├── cloudflare/           # Web app docs
+    └── llms/                 # LLM-specific docs
+```
+
+## Core Components Impact Analysis
+
+When making changes, consider these component interactions:
+
+### MCP Server (`packages/mcp-server/`)
+- **Tools** (19 modules): Query, create, update Sentry resources
+- **API Client**: Sentry API integration layer
+- **Server**: MCP protocol handler and error formatting
+
+### Cloudflare Web App (`packages/mcp-cloudflare/`)
+- **Client**: React-based chat interface with UI components
+- **Server**: Worker API routes for search, auth, MCP communication
+- **Integration**: Uses MCP server for tool execution
+
+### Testing Infrastructure
+- **Unit Tests**: Co-located with each component
+- **Mocks**: Realistic API responses in `mcp-server-mocks/`
+- **Evaluations**: AI-driven integration tests in `mcp-server-evals/`
+- **Test Client**: Interactive MCP testing in `mcp-test-client/`
+
+### Build & Deployment
+- **Tool Definitions**: Auto-generated JSON schemas for client consumption
+- **TypeScript Config**: Shared configurations in `mcp-server-tsconfig/`
+- **Packaging**: Multiple package coordination
+
+## References
+
+- MCP Protocol: https://modelcontextprotocol.io
+- Sentry API: https://docs.sentry.io/api/
diff --git a/docs/error-handling.mdc b/docs/error-handling.mdc
new file mode 100644
index 000000000..f04eb0aba
--- /dev/null
+++ b/docs/error-handling.mdc
@@ -0,0 +1,327 @@
+# Error Handling in MCP Tools
+
+This document describes how errors are handled throughout the MCP server tool system, including both regular tools and embedded agent tools.
+
+## Error Types and Hierarchy
+
+### API Error Classes (from api-client/errors.ts)
+
+```
+ApiError (base class)
+├── ApiClientError (4xx - user errors, NOT sent to Sentry)
+│   ├── ApiPermissionError (403)
+│   ├── ApiNotFoundError (404)
+│   ├── ApiValidationError (400, 422)
+│   ├── ApiAuthenticationError (401)
+│   └── ApiRateLimitError (429)
+└── ApiServerError (5xx - system errors, SENT to Sentry)
+```
+
+**Key Method:**
+- `ApiClientError.toUserMessage()` - Returns `"API error (status): message"`
+ - For 404s with generic messages, adds: "Please verify that the organization, project, or resource ID is correct and that you have access to it."
+ - For 404s with specific messages, adds: "Please verify the parameters are correct."
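+
+That branching can be sketched as follows (simplified; the real implementation lives in `api-client/errors.ts`, and the generic-message heuristic here is an assumption):
+
+```typescript
+// Simplified sketch of toUserMessage(): 404s with a bare "Not found" body get
+// the detailed hint, while 404s with a specific message get the brief one.
+function toUserMessage(status: number, message: string): string {
+  let result = `API error (${status}): ${message}`;
+  if (status === 404) {
+    const isGeneric = /^not found\.?$/i.test(message.trim());
+    result += isGeneric
+      ? " Please verify that the organization, project, or resource ID is correct and that you have access to it."
+      : " Please verify the parameters are correct.";
+  }
+  return result;
+}
+```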
+
+### Application Error Classes (from errors.ts)
+
+- `UserInputError` - User-facing error for validation failures
+ - Parameter validation failures
+ - Any user-correctable error
+- `ConfigurationError` - Missing/invalid configuration
+
+### Error Categories
+
+**User-Facing Errors (Should NOT create Sentry issues):**
+- All `ApiClientError` subclasses
+- `UserInputError`
+- `ConfigurationError`
+
+**System Errors (Should be captured by Sentry):**
+- `ApiServerError`
+- Network failures
+- Unexpected runtime errors
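+
+The category split reduces to a single predicate (the classes below are minimal stand-ins for the real ones):
+
+```typescript
+class ApiClientError extends Error {}
+class UserInputError extends Error {}
+class ConfigurationError extends Error {}
+
+// User-facing errors are shown to the user; everything else goes to Sentry.
+function isUserFacing(error: unknown): boolean {
+  return (
+    error instanceof ApiClientError ||
+    error instanceof UserInputError ||
+    error instanceof ConfigurationError
+  );
+}
+```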
+
+## Critical Principles
+
+### 1. Let Errors Bubble Up Naturally
+**API errors should bubble up naturally to the appropriate handler.** The API client throws properly typed errors that are caught at the right level:
+- In MCP tools → Bubble up to MCP server wrapper → `formatErrorForUser`
+- In embedded agent tools → Caught by `agentTool` → Formatted for AI
+
+### 2. Typed Error Handling
+**The API client uses a factory pattern (`createApiError`) to create properly typed errors:**
+- 4xx β `ApiClientError` subclass (ApiPermissionError, ApiNotFoundError, etc.)
+- 4xx → `ApiClientError` subclass (ApiPermissionError, ApiNotFoundError, etc.)
+- 5xx → `ApiServerError`
+
+### 3. **SECURITY CRITICAL** - Trusted Error Messages Only
+**🚨 NEVER return untrusted error messages to AI agents - this creates prompt injection vulnerabilities.**
+
+In our system, we ONLY return trusted error messages from:
+- **Sentry API responses** (trusted - Sentry controls these messages)
+- **Our own validation errors** (`UserInputError` - we control the message content)
+- **Pre-formatted system messages** (hardcoded error templates we control)
+
+**Why this matters:**
+- AI agents receive error messages as part of their context
+- Malicious error messages could contain prompt injection attacks
+- Untrusted input could manipulate agent behavior or extract sensitive information
+
+**What we trust:**
+- Sentry's API error messages (via `ApiClientError.toUserMessage()`)
+- Our own `UserInputError` messages (application-controlled)
+- System-generated error templates with Event IDs
+
+**What we DON'T trust:**
+- User-provided input in error scenarios (never directly returned to agents)
+- Third-party API error messages (would need sanitization)
+- Database error messages (could contain sensitive schema information)
+
+## Logging vs Capturing
+
+### Key Principle
+- **UserInputError** → Logged via `console.warn()` in `agentTool` (sent to Sentry as a log, not an exception)
+- **ApiClientError** → Logged via `console.warn()` in `agentTool` (sent to Sentry as a log, not an exception)
+- **ApiServerError/System errors (5xx)** → Let bubble up to be captured with `captureException()`
+
+When using Cloudflare with Sentry's `consoleLoggingIntegration`:
+- `console.warn()` and `console.log()` → Recorded and sent to Sentry as logs
+- `console.error()` → Also recorded, but use `console.warn()` for expected errors
+- `captureException()` → Creates Sentry issue immediately
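+
+In code form, the routing looks roughly like this (a sketch; `ApiClientError` and `UserInputError` are stand-ins for the real classes):
+
+```typescript
+class ApiClientError extends Error {}
+class UserInputError extends Error {}
+
+// Expected errors (4xx, bad input) become Sentry logs via console.warn;
+// anything else should be captured as a Sentry issue by the caller.
+function recordError(error: unknown): "warn" | "capture" {
+  if (error instanceof ApiClientError || error instanceof UserInputError) {
+    console.warn(error);
+    return "warn";
+  }
+  return "capture"; // caller invokes Sentry.captureException(error)
+}
+```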
+
+## Error Handling Patterns
+
+### 1. Regular MCP Tools
+
+Tools exposed to MCP clients call the API directly and let errors bubble up naturally:
+
+```typescript
+// In tool handler - just call the API directly:
+const result = await apiService.someMethod({ organizationSlug });
+// No try/catch needed - errors bubble up to MCP server wrapper
+```
+
+**What happens:**
+- API client throws typed errors via `createApiError` factory:
+  - 4xx → `ApiClientError` subclass (ApiPermissionError, ApiNotFoundError, etc.)
+  - 5xx → `ApiServerError`
+- Errors bubble up naturally to MCP server wrapper
+- `formatErrorForUser` handles formatting:
+  - `ApiClientError` → "Input Error" message with `toUserMessage()`, NOT logged to Sentry
+  - `ApiServerError` → "Error" message with Event ID, logged to Sentry
+  - `UserInputError` → "Input Error" message, NOT logged to Sentry
+
+### 2. Embedded Agent Tools
+
+Tools used by AI agents within other tools use `agentTool()` which returns structured responses:
+
+```typescript
+import { agentTool } from "../../internal/agents/tools/utils";
+
+return agentTool({
+ description: "Tool description",
+ parameters: z.object({ ... }),
+ execute: async (params) => {
+ // Just call the API directly - no error handling needed
+ const data = await apiService.someMethod(params);
+ return formatResult(data);
+ }
+});
+```
+
+**What happens:**
+- API client throws `ApiClientError` or `ApiServerError`
+- `agentTool` catches ALL API errors and returns structured responses:
+ - **Success:** `{ result: }`
+ - **UserInputError:** `{ error: "Input Error: {message}. You may be able to resolve this by addressing the concern and trying again." }`
+ - **ApiClientError:** `{ error: "Input Error: API error (404): Project not found. Please verify the parameters are correct. You may be able to resolve this by addressing the concern and trying again." }`
+ - **ApiServerError:** `{ error: "Server Error (502): Bad Gateway. Event ID: abc123def456. This is a system error that cannot be resolved by retrying." }`
+- Other errors (unexpected) β Re-thrown to parent tool
+
+**Key Benefits:**
+- **Structured responses:** AI agents receive consistent `{error?, result?}` objects instead of thrown errors
+- **Better error handling:** Agents can check for `error` property and handle failures gracefully
+- **Retry logic:** Agents can analyze error messages and determine if retry is worthwhile
+- **Type safety:** Return types are preserved while error handling is abstracted
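+
+A stripped-down sketch of what `agentTool()` does internally (the real implementation in `internal/agents/tools/utils` handles more error types):
+
+```typescript
+class UserInputError extends Error {}
+
+// Wrap an execute function so the agent always receives a structured
+// { result?, error? } object instead of a thrown exception.
+function agentToolSketch<P, R>(execute: (params: P) => Promise<R>) {
+  return async (params: P): Promise<{ result?: R; error?: string }> => {
+    try {
+      return { result: await execute(params) };
+    } catch (err) {
+      if (err instanceof UserInputError) {
+        return {
+          error: `Input Error: ${err.message}. You may be able to resolve this by addressing the concern and trying again.`,
+        };
+      }
+      throw err; // unexpected errors propagate to the parent tool
+    }
+  };
+}
+```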
+
+### 3. Error Flow Examples
+
+#### Example 1: Permission Error in Embedded Agent Tool
+
+```
+1. User calls search_events tool
+2. search_events uses AI agent with datasetAttributesTool
+3. datasetAttributesTool calls fetchCustomAttributes()
+4. fetchCustomAttributes calls apiService.listTraceItemAttributes() directly
+5. API returns 403 "no multi-project access"
+6. API client creates ApiPermissionError via createApiError factory and throws
+7. fetchCustomAttributes lets it bubble up (no try/catch)
+8. agentTool catches ApiClientError (specifically ApiPermissionError)
+9. Logs to console.warn() for Sentry logging
+10. Returns structured response:
+    { error: "Input Error: API error (403): You do not have access to query across multiple projects. Please select a project for your query. You may be able to resolve this by addressing the concern and trying again." }
+11. AI agent receives the structured response and can check the error property
+12. AI agent analyzes error message and retries with a specific project
+```
+
+#### Example 2: Server Error
+
+```
+1. User calls get_issue_details tool
+2. Tool calls apiService.getIssue() directly (no withApiErrorHandling)
+3. API returns 502 Bad Gateway
+4. API client creates ApiServerError via createApiError factory and throws
+5. Error bubbles up naturally to MCP server wrapper
+6. formatErrorForUser handles ApiServerError, logs to Sentry with captureException
+7. User receives formatted error response with Event ID
+```
+
+## Best Practices
+
+### DO:
+- Call API methods directly and let errors bubble up naturally
+- Use `agentTool()` for embedded agent tools
+- Let typed errors (ApiClientError, ApiServerError) bubble up
+- Include helpful context in error messages
+- Rely on the error hierarchy for proper handling
+- Check for `error` property in agent tool responses
+- **SECURITY: Only return trusted error messages to AI agents**
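+
+On the consumer side, checking the `error` property looks like this (illustrative shape and names):
+
+```typescript
+// Structured response shape returned by agentTool-wrapped tools.
+interface AgentToolResponse<R> {
+  result?: R;
+  error?: string;
+}
+
+// Inspect `error` first; only trust `result` when no error is present.
+function unwrap<R>(response: AgentToolResponse<R>): R {
+  if (response.error !== undefined) {
+    throw new Error(`Tool failed: ${response.error}`);
+  }
+  return response.result as R;
+}
+```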
+
+### DON'T:
+- Don't wrap API calls in try/catch unless adding value
+- Don't use `withApiErrorHandling` anymore (deprecated)
+- Don't use the old `wrapAgentToolExecute` function (use `agentTool` instead)
+- Don't use `logIssue()` for expected API errors (4xx)
+- Don't use `captureException()` for UserInputError or ApiClientError
+- Don't create Sentry issues for user-facing errors
+- **SECURITY: NEVER pass untrusted error messages to AI agents - risk of prompt injection**
+
+### Security Guidelines for Agent Error Messages:
+
+**✅ SAFE - These are trusted and can be returned to agents:**
+```typescript
+// Sentry API errors (controlled by Sentry)
+return { error: `Input Error: ${apiError.toUserMessage()}. You may be able to resolve this...` };
+
+// Our own validation errors (controlled by our code)
+throw new UserInputError("Invalid organization slug format");
+
+// System-generated templates (controlled by our code)
+return { error: `Server Error (${status}): ${message}. Event ID: ${eventId}...` };
+```
+
+**❌ UNSAFE - These could enable prompt injection:**
+```typescript
+// User input directly in error (NEVER do this)
+return { error: `Invalid input: ${userProvidedValue}` }; // 🚨 DANGEROUS
+
+// Third-party API errors without validation (NEVER do this)
+return { error: externalApiResponse.error }; // 🚨 DANGEROUS
+
+// Database errors (could leak schema info)
+return { error: sqlError.message }; // 🚨 DANGEROUS
+```
+
+## Error Propagation Summary
+
+```
+API Call
+  ↓
+createApiError Factory
+  ├── 4xx → ApiClientError subclass (with toUserMessage())
+  └── 5xx → ApiServerError
+  ↓
+Thrown directly to tool (no withApiErrorHandling)
+  ↓
+In Embedded Agent Tool?
+  ├── Yes → agentTool
+  │   ├── UserInputError → Returns { error: "Input Error: ..." }
+  │   ├── ApiClientError → Returns { error: "Input Error: ..." } with toUserMessage()
+  │   ├── ApiServerError → Returns { error: "Server Error (5xx): ..." } + Event ID (logged to Sentry)
+  │   └── Other (unexpected) → Re-throw
+  │        ↓
+  │   AI agent receives structured {error?, result?} response
+  │        ↓
+  │   AI agent checks for error property and handles accordingly
+  └── No → MCP Server Wrapper → formatErrorForUser
+      ├── UserInputError → "**Input Error**" formatted
+      ├── ApiClientError → "**Input Error**" with toUserMessage()
+      ├── ApiServerError → "**Error**" + Event ID (logged to Sentry)
+      └── Other → Captured by Sentry
+```
+
+## Console Logging
+
+When using Cloudflare Workers with Sentry integration:
+- `console.error()` is captured as breadcrumbs (not as issues)
+- Use for debugging information that should be attached to real errors
+- Don't use for expected error conditions
+
+## Implementation Checklist
+
+### For Regular MCP Tools:
+
+1. **Call the API directly without wrappers:**
+ ```typescript
+ // Just call the API - errors bubble up naturally
+ const result = await apiService.someMethod({ organizationSlug });
+ ```
+
+2. **Let errors bubble up to the MCP server wrapper** - don't add try/catch unless you're adding value
+
+3. **The MCP server will automatically:**
+ - Format errors via `formatErrorForUser`
+ - Log ApiServerError to Sentry with captureException
+ - Return formatted error to MCP client
+
+### For Embedded Agent Tools:
+
+1. **Use `agentTool()` instead of the regular `tool()` function:**
+ ```typescript
+ return agentTool({
+ description: "Tool description",
+ parameters: z.object({ ... }),
+ execute: async (params) => {
+ // Your tool implementation - return the result directly
+ const data = await apiService.someMethod(params);
+ return formatResult(data);
+ }
+ });
+ ```
+
+2. **Inside the tool, call the API directly:**
+ ```typescript
+ // No error handling needed - agentTool handles it automatically
+ const data = await apiService.someMethod(params);
+ ```
+
+3. **The wrapper will automatically:**
+ - Return `{ result: }` on success
+ - Return `{ error: "formatted message" }` on failure
+ - Log UserInputError/ApiClientError to console.warn for Sentry logging
+ - Include Event IDs for ApiServerError in error messages
+
+### Error Message Formats:
+
+- **UserInputError to Agent:** `{ error: "Input Error: {message}. You may be able to resolve this by addressing the concern and trying again." }`
+- **ApiClientError to Agent:** `{ error: "Input Error: {toUserMessage()}. You may be able to resolve this by addressing the concern and trying again." }`
+- **ApiServerError to Agent:** `{ error: "Server Error (5xx): {message}. Event ID: {eventId}. This is a system error that cannot be resolved by retrying." }`
+- **ApiClientError to MCP User:** Formatted with "**Input Error**" header and toUserMessage()
+- **ApiServerError to MCP User:** Formatted with "**Error**" header + Event ID (logged to Sentry)
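A sketch of the agent-facing mapping above (hypothetical names; the real logic lives in the `agentTool` wrapper):

```typescript
// Illustrative stand-in error classes; not the real implementations.
class UserInputError extends Error {}
class ApiServerError extends Error {
  constructor(message: string, public eventId: string) {
    super(message);
  }
}

function formatAgentError(error: Error): { error: string } {
  if (error instanceof UserInputError) {
    return {
      error: `Input Error: ${error.message}. You may be able to resolve this by addressing the concern and trying again.`,
    };
  }
  if (error instanceof ApiServerError) {
    return {
      error: `Server Error (5xx): ${error.message}. Event ID: ${error.eventId}. This is a system error that cannot be resolved by retrying.`,
    };
  }
  throw error; // unexpected errors are re-thrown, per the propagation flow
}
```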
+
+## Testing Error Handling
+
+When testing tools, verify:
+1. 404 errors include helpful hints via toUserMessage():
+ - Generic messages get detailed help about checking org/project/resource IDs
+ - Specific messages get brief parameter verification hint
+2. 403 errors are returned to agents as formatted markdown
+3. 5xx errors are captured by Sentry with Event IDs
+4. Network errors bubble up appropriately
+5. UserInputErrors have clear, actionable messages
+6. ApiClientError in agent tools returns formatted markdown with "**Input Error**" header
diff --git a/docs/github-actions.mdc b/docs/github-actions.mdc
new file mode 100644
index 000000000..425fb8764
--- /dev/null
+++ b/docs/github-actions.mdc
@@ -0,0 +1,67 @@
+# GitHub Actions
+
+CI/CD workflows for the Sentry MCP project.
+
+## Workflows
+
+### test.yml
+Runs on all pushes to main and pull requests:
+- Build, lint, unit tests
+- Code coverage reporting
+
+### deploy.yml
+Runs after tests pass on main branch:
+- **Canary deployment**: Deploy to `sentry-mcp-canary` worker with isolated resources
+- **Smoke tests**: Test canary deployment
+- **Production deployment**: Deploy to `sentry-mcp` worker (only if canary tests pass)
+- **Production smoke tests**: Test production deployment
+- **Automatic rollback**: Rollback production if smoke tests fail
+
+### eval.yml
+Runs evaluation tests against the MCP server.
+
+## Required Secrets
+
+Repository secrets (no environment needed):
+
+- **`CLOUDFLARE_API_TOKEN`** - Cloudflare API token with Workers deployment permissions
+- **`CLOUDFLARE_ACCOUNT_ID`** - Your Cloudflare account ID
+- **`SENTRY_AUTH_TOKEN`** - For Sentry release tracking
+- **`SENTRY_CLIENT_SECRET`** - Sentry OAuth client secret
+- **`COOKIE_SECRET`** - Session cookie encryption secret
+- **`OPENAI_API_KEY`** - For AI-powered search features
+
+## Deployment Architecture
+
+### Workers
+- **`sentry-mcp`** - Production worker at `https://mcp.sentry.dev`
+- **`sentry-mcp-canary`** - Canary worker at `https://sentry-mcp-canary.getsentry.workers.dev`
+
+### Resource Isolation
+Canary and production use separate resources for complete isolation:
+
+| Resource | Production | Canary |
+|----------|------------|---------|
+| KV Namespace | `8dd5e9bafe1945298e2d5ca3b408a553` | `a3fe0d23b2d34416930e284362a88a3b` |
+| Rate Limiter IDs | `1001`, `1002` | `2001`, `2002` |
+| Wrangler Config | `wrangler.jsonc` | `wrangler.canary.jsonc` |
+
+### Deployment Flow
+1. **Build once** - Single build for both deployments
+2. **Deploy canary** - `wrangler deploy --config wrangler.canary.jsonc`
+3. **Wait 30s** - Allow propagation
+4. **Test canary** - Run smoke tests against canary worker
+5. **Deploy production** - `wrangler deploy` (only if canary tests pass)
+6. **Wait 30s** - Allow propagation
+7. **Test production** - Run smoke tests against production worker
+8. **Rollback** - `wrangler rollback` if production tests fail
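The flow above maps to job dependencies roughly like this (illustrative fragment; step and script names such as `pnpm smoke-test` are assumptions, the real workflow lives in `.github/workflows/deploy.yml`):

```yaml
jobs:
  deploy-canary:
    steps:
      - run: pnpm build
      - run: wrangler deploy --config wrangler.canary.jsonc
      - run: sleep 30                # allow propagation
      - run: pnpm smoke-test         # against the canary worker
  deploy-production:
    needs: deploy-canary             # only runs if canary smoke tests pass
    steps:
      - run: wrangler deploy
      - run: sleep 30
      - run: pnpm smoke-test         # against production
      - if: failure()
        run: wrangler rollback
```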
+
+## Manual Deployment
+
+Trigger via GitHub Actions → Deploy to Cloudflare → "Run workflow"
+
+## Troubleshooting
+
+1. **Authentication failed** - Check `CLOUDFLARE_API_TOKEN` permissions
+2. **Build failures** - Review TypeScript/build logs
+3. **Smoke test failures** - Check worker logs in Cloudflare dashboard
\ No newline at end of file
diff --git a/docs/llms/README.md b/docs/llms/README.md
new file mode 100644
index 000000000..204429f90
--- /dev/null
+++ b/docs/llms/README.md
@@ -0,0 +1,29 @@
+# LLM-Specific Documentation
+
+This directory contains meta-documentation specifically for LLMs working with the Sentry MCP codebase.
+
+## Contents
+
+### documentation-style-guide.mdc
+Guidelines for writing effective documentation that LLMs can consume efficiently. Defines principles like assuming intelligence, being concise, and showing rather than telling.
+
+### document-scopes.mdc
+Defines the specific purpose, content requirements, and line count targets for each documentation file. Helps maintain focus and prevent scope creep.
+
+### documentation-todos.mdc
+Specific tasks for improving each document based on the style guide and scope definitions. Tracks the documentation refactoring effort.
+
+## Purpose
+
+These documents help ensure that:
+- Documentation remains concise and focused
+- LLMs get project-specific information, not general programming knowledge
+- Redundancy is minimized through proper cross-referencing
+- Each document has a clear, defined purpose
+
+## For Human Contributors
+
+While these documents are designed for LLM consumption, they also serve as excellent guidelines for human contributors who want to understand:
+- How to write documentation for this project
+- What belongs in each document
+- How to maintain consistency across docs
\ No newline at end of file
diff --git a/docs/llms/document-scopes.mdc b/docs/llms/document-scopes.mdc
new file mode 100644
index 000000000..6aea7bd49
--- /dev/null
+++ b/docs/llms/document-scopes.mdc
@@ -0,0 +1,223 @@
+# Document Scopes
+
+Defines the specific purpose and content for each documentation file.
+
+## Reference Style (MANDATORY)
+
+- Use @path for all local file references, repo-root relative (e.g., `@packages/mcp-server/src/server.ts`).
+- Refer to sections by name: `See "Error Handling" in @docs/common-patterns.mdc`.
+- Keep Markdown links only for external sites.
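For example, a compliant reference block (paths illustrative):

```markdown
See "Error Handling" in @docs/common-patterns.mdc and the implementation in
@packages/mcp-server/src/server.ts. External background:
[Model Context Protocol](https://modelcontextprotocol.io/).
```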
+
+## Core Documents
+
+### architecture.mdc
+**Purpose**: Explain system design and package interactions
+
+**Must Include**:
+- Package responsibilities and boundaries
+- Data flow between components
+- Key architectural decisions and trade-offs
+- How MCP concepts map to implementation
+
+**Must Exclude**:
+- Installation instructions
+- Implementation details (link to code instead)
+- General MCP protocol explanation
+
+### common-patterns.mdc
+**Purpose**: Reusable patterns used throughout the codebase
+
+**Must Include**:
+- Error handling patterns (UserInputError, ApiError)
+- Zod schema patterns and conventions
+- TypeScript type helpers
+- Response formatting patterns
+- Parameter validation patterns
+
+**Must Exclude**:
+- Tool/prompt/resource-specific patterns
+- External library documentation
+- One-off patterns used in single places
+
+### quality-checks.mdc
+**Purpose**: Required checks and commands for code quality
+
+**Must Include**:
+- Essential commands that must pass
+- When to run each check
+- What each check validates
+- Common failure fixes
+
+**Must Exclude**:
+- Tool installation instructions
+- Detailed explanations of what linting is
+- CI/CD configuration
+
+## Feature Implementation Guides
+
+### adding-tools.mdc
+**Purpose**: How to add new MCP tools
+
+**Must Include**:
+- Tool definition structure
+- Handler implementation pattern
+- Required tests (unit + eval)
+- LLM-friendly descriptions
+- References to existing tools
+
+**Must Exclude**:
+- What MCP tools are conceptually
+- Duplicate testing patterns (link to testing.mdc)
+- Full code examples (reference real implementations)
+
+## Technical Guides
+
+### testing.mdc
+**Purpose**: Testing strategies and patterns
+
+**Must Include**:
+- Unit test patterns with snapshots
+- Evaluation test setup
+- Mock patterns with MSW
+- When to update snapshots
+- Test file organization
+
+**Must Exclude**:
+- Vitest documentation
+- General testing philosophy
+- Duplicate mock examples
+
+### api-patterns.mdc
+**Purpose**: Sentry API client usage and mocking
+
+**Must Include**:
+- apiServiceFromContext pattern
+- Schema definitions with Zod
+- Mock handler patterns
+- Multi-region support
+- Error handling
+
+**Must Exclude**:
+- HTTP basics
+- Zod library documentation
+- Duplicate error patterns (link to common-patterns.mdc)
+
+## Operations Guides
+
+### releases/cloudflare.mdc
+**Purpose**: Cloudflare Workers release process
+
+**Must Include**:
+- Wrangler configuration
+- Environment variables
+- MCP handler setup
+- OAuth provider configuration
+- Deployment commands (manual and automated)
+- Version uploads and gradual rollouts
+- Monitoring and troubleshooting
+
+**Must Exclude**:
+- Cloudflare Workers concepts
+- General deployment best practices
+- npm package release process (see stdio.mdc)
+
+### releases/stdio.mdc
+**Purpose**: npm package release process
+
+**Must Include**:
+- Version management
+- npm publishing workflow
+- User installation instructions (Claude Desktop, Cursor)
+- Environment variable configuration
+- Testing releases locally
+- Beta releases
+
+**Must Exclude**:
+- Cloudflare deployment (see cloudflare.mdc)
+- General npm documentation
+- IDE-specific setup details
+
+### monitoring.mdc
+**Purpose**: Observability and instrumentation
+
+**Must Include**:
+- Sentry integration patterns
+- Telemetry setup
+- Error tracking
+- Performance monitoring
+- Tag conventions
+
+**Must Exclude**:
+- What observability is
+- Sentry product documentation
+- General monitoring concepts
+
+### security.mdc
+**Purpose**: Authentication and security patterns
+
+**Must Include**:
+- OAuth implementation
+- Token management
+- Multi-tenant security
+- CORS configuration
+- Security headers
+
+**Must Exclude**:
+- OAuth protocol explanation
+- General security best practices
+- Duplicate deployment content
+
+## Meta Documents
+
+### README.md
+**Purpose**: Documentation index and navigation
+
+**Must Include**:
+- Document listing with one-line descriptions
+- Quick reference for common tasks
+- Links to style guide and scopes
+
+**Must Exclude**:
+- Detailed explanations
+- Duplicate content from other docs
+- Installation instructions
+
+### cursor.mdc
+**Purpose**: Cursor IDE-specific configuration
+
+**Must Include**:
+- Cursor metadata header
+- Link to documentation
+- Critical quality checks
+- Cursor-specific settings
+
+**Must Exclude**:
+- All content duplicated in other docs
+- General coding guidelines
+- Project overview
+
+### AGENTS.md
+**Purpose**: Agent entry point (Claude Code, Cursor, etc.)
+
+**Must Include**:
+- Brief project description
+- Documentation directory reference
+- Critical quality checks
+- Agent-specific notes (tools, transports, auth defaults)
+
+**Must Exclude**:
+- Detailed architecture (link to architecture.mdc)
+- Development setup (link to relevant docs)
+- Integration instructions (keep minimal)
+
+## Optimization Strategy
+
+Focus on clarity and usefulness, not arbitrary line counts.
+
+Key improvements needed:
+- **security.mdc**: Too much OAuth theory → focus on implementation
+- **api-patterns.mdc**: Redundant examples → consolidate patterns
+- **releases/cloudflare.mdc**: Focus on MCP-specific config, not generic Cloudflare docs
+- **monitoring.mdc**: Verbose explanations → code examples
+
+The goal: Each document should be focused enough to be useful in a single context window while remaining comprehensive for its topic.
diff --git a/docs/llms/documentation-style-guide.mdc b/docs/llms/documentation-style-guide.mdc
new file mode 100644
index 000000000..ad1359a0b
--- /dev/null
+++ b/docs/llms/documentation-style-guide.mdc
@@ -0,0 +1,214 @@
+# Documentation Style Guide
+
+This guide defines how to write effective documentation for LLMs working with the Sentry MCP codebase.
+
+## Core Principles
+
+### 1. Assume Intelligence
+- LLMs understand programming concepts - don't explain basics
+- Focus on project-specific patterns and conventions
+- Skip obvious steps like "create a file" or "save your changes"
+
+### 2. Optimize for Context Windows
+- Keep documents focused on a single topic
+- Use code examples instead of verbose explanations
+- Every line should provide unique value
+- Split large topics across multiple focused docs
+
+### 3. Show, Don't Tell
+- Include minimal, focused code examples
+- Reference actual implementations: `See @packages/mcp-server/src/server.ts:45`
+- Use real patterns from the codebase
+
+## MDC Header Format
+
+### For Cursor IDE Rules
+```markdown
+---
+description: Brief description of what this document covers
+globs:
+alwaysApply: true
+---
+```
+
+The header is optional for most docs but required for `cursor.mdc` to function as a Cursor IDE rule file.
+
+## Document Structure
+
+### Required Sections
+
+```markdown
+# [Feature/Pattern Name]
+
+Brief one-line description of what this covers.
+
+## When to Use
+
+Bullet points describing specific scenarios.
+
+## Implementation Pattern
+
+```typescript
+// Minimal example showing the pattern
+const example = {
+ // Only include what's unique to this project
+};
+```
+
+## Key Conventions
+
+Project-specific rules that must be followed.
+
+## Common Patterns
+
+Link to reusable patterns: See "Error Handling" in @docs/common-patterns.mdc
+
+## References
+
+- Implementation: `@packages/mcp-server/src/[file].ts`
+- Tests: `@packages/mcp-server/src/[file].test.ts`
+- Examples in codebase: [specific function/tool names]
+```
+
+## What to Include
+
+### DO Include:
+- **Project-specific patterns** - How THIS codebase does things
+- **Architecture decisions** - Why things are structured this way
+- **Required conventions** - Must-follow rules for consistency
+- **Integration points** - How components interact
+- **Validation requirements** - What checks must pass
+
+### DON'T Include:
+- **General programming concepts** - How to write TypeScript
+- **Tool documentation** - How to use pnpm or Vitest
+- **Verbose examples** - Keep code samples minimal
+- **Redundant content** - Link to other docs instead
+- **Step-by-step tutorials** - LLMs don't need hand-holding
+
+## Code Examples
+
+### Good Example:
+```typescript
+// Tool parameter pattern used throughout the codebase
+export const ParamOrganizationSlug = z
+ .string()
+ .toLowerCase()
+ .trim()
+ .describe("The organization's slug. Find using `find_organizations()` tool.");
+```
+
+### Bad Example:
+```typescript
+// First, import the required libraries
+import { z } from "zod";
+
+// Define a schema for the organization slug parameter
+// This schema will validate that the input is a string
+// It will also convert to lowercase and trim whitespace
+export const ParamOrganizationSlug = z
+ .string() // Ensures the value is a string
+ .toLowerCase() // Converts to lowercase
+ .trim() // Removes whitespace
+ .describe("The organization's slug..."); // Adds description
+```
+
+## Cross-References
+
+### File References (MANDATORY):
+- Use @path syntax for local files: `@docs/common-patterns.mdc`
+- Always reference from repo root: `@packages/mcp-server/src/server.ts`
+- Do NOT use Markdown links for local files (avoid markdown `[text](./...)` patterns)
+- Prefer path-only mentions to help agents parse
+
+### Section References:
+- Refer to sections by name, not anchors: `See "Error Handling" in @docs/common-patterns.mdc`
+- If multiple sections share a name, include a short hint: `("Zod Patterns" in @docs/common-patterns.mdc)`
+
+### Code References:
+- Use concrete paths and identifiers: `@packages/mcp-server/src/tools/search-events/index.ts:buildQuery`
+- Optional line hints for humans: `server.ts:45-52` (agents may ignore)
+- Prefer real implementations over fabricated examples
+
+### External Links:
+- Keep standard Markdown links for external sites
+- Use concise link text; avoid link-only bullets
+
+## Language and Tone
+
+### Use Direct Language:
+- ❌ "You might want to consider using..."
+- ✅ "Use UserInputError for validation failures"
+
+### Be Specific:
+- ❌ "Handle errors appropriately"
+- ✅ "Throw UserInputError with a message explaining how to fix it"
+
+### Focus on Requirements:
+- ❌ "It's a good practice to run tests"
+- ✅ "Run `pnpm test` - all tests must pass"
+
+## Document Length Guidelines
+
+### Context Window Optimization:
+- Each document should be consumable in a single context
+- Length depends on complexity, not arbitrary limits
+- Verbose explanations → concise code examples
+- Complex topics → split into focused documents
+
+### Examples:
+- **Quality checks**: ~100 lines (simple commands)
+- **Adding a tool**: ~300 lines (includes examples)
+- **API patterns**: May be longer if examples are valuable
+- **Architecture**: Split into overview + detailed sections
+
+## Maintenance
+
+### When Updating Docs:
+1. Check for redundancy with other docs
+2. Update cross-references if needed
+3. Ensure examples still match codebase
+4. Keep line count under 400
+
+### Red Flags:
+- Verbose prose explaining what code could show
+- Repeated content → extract to common-patterns.mdc
+- No code references → add implementation examples
+- Generic programming advice → remove it
+- Multiple concepts in one doc → split by topic
+
+## Example: Refactoring a Verbose Section
+
+### Before:
+```markdown
+## Setting Up Your Development Environment
+
+First, make sure you have Node.js installed. You can download it from nodejs.org.
+Next, install pnpm globally using npm install -g pnpm. Then clone the repository
+using git clone. Navigate to the project directory and run pnpm install to install
+all dependencies. Make sure to create your .env file with the required variables.
+```
+
+### After:
+```markdown
+## Environment Setup
+
+Required: Node.js 20+, pnpm
+
+```bash
+pnpm install
+cp .env.example .env # Add your API keys
+```
+
+See "Development Setup" in @AGENTS.md for environment variables.
+```
+
+## Agent Readability Checklist
+
+- Uses @path for all local file references
+- Short, focused sections with concrete examples
+- Minimal prose; prefers code and commands
+- Clear preconditions and environment notes
+- Error handling and validation rules are explicit
+
+This style guide ensures documentation remains focused, valuable, and maintainable for LLM consumption.
diff --git a/docs/logging.mdc b/docs/logging.mdc
new file mode 100644
index 000000000..755994b4f
--- /dev/null
+++ b/docs/logging.mdc
@@ -0,0 +1,133 @@
+---
+description: Logging reference using LogTape and Sentry
+---
+# Logging Reference
+
+How logging works in the Sentry MCP server using LogTape and Sentry. For tracing, spans, or metrics see @docs/monitoring.mdc.
+
+## Overview
+
+We use [LogTape](https://logtape.org/) for structured logging with two sinks:
+- **Console sink**: JSON Lines format for log aggregation
+- **Sentry sink**: Sends logs to Sentry's Logs product using `Sentry.logger.*` API
+
+Log levels are controlled by:
+1. `LOG_LEVEL` environment variable (e.g., `"debug"`, `"info"`, `"warning"`, `"error"`)
+2. Falls back to `NODE_ENV`: `"debug"` in development, `"info"` in production
+
+**Important**: We use a custom LogTape sink that calls `Sentry.logger.*` methods to send logs to Sentry's Logs product. The `@logtape/sentry` package uses `captureException/captureMessage` which creates Issues instead of Logs.
+
+Implementation: @packages/mcp-server/src/telem/logging.ts
+
+## Using the Log Helpers
+
+### logInfo() - Routine telemetry
+```typescript
+import { logInfo } from "@sentry/mcp-server/telem/logging";
+
+logInfo("MCP server started", {
+ loggerScope: ["server", "lifecycle"],
+ extra: { port: 3000 }
+});
+```
+
+### logWarn() - Operational warnings
+```typescript
+import { logWarn } from "@sentry/mcp-server/telem/logging";
+
+logWarn("Rate limit approaching", {
+ loggerScope: ["api", "rate-limit"],
+ extra: { remaining: 10, limit: 100 }
+});
+```
+
+### logError() - Operational errors (no Sentry issue)
+```typescript
+import { logError } from "@sentry/mcp-server/telem/logging";
+
+logError(error, {
+ loggerScope: ["tools", "fetch-trace"],
+ extra: { traceId: "abc123" }
+});
+```
+
+### logIssue() - System errors (creates Sentry issue)
+```typescript
+import { logIssue } from "@sentry/mcp-server/telem/logging";
+
+const eventId = logIssue(error, {
+ loggerScope: ["oauth", "token-refresh"],
+ contexts: {
+ oauth: { client_id: "..." }
+ }
+});
+```
+
+## When to Use Each Helper
+
+**Use `logIssue()` for:**
+- System errors (5xx, network failures, unexpected exceptions)
+- Critical failures that need investigation
+- Creates a Sentry Issue + emits a log entry with Event ID
+
+**Use `logError()` for:**
+- Expected operational errors (trace fetch failed, encoding error)
+- Errors that are handled gracefully
+- Sends to Sentry Logs only (no Issue created)
+
+**Use `logWarn()` for:**
+- Rate limit approaching
+- Deprecated API usage
+- Configuration warnings
+
+**Use `logInfo()` for:**
+- Request lifecycle (connection established, tool invoked)
+- Configuration output
+- Routine telemetry
+
+**Skip logging:**
+- `UserInputError` - Expected validation failures
+- 4xx API responses - Client errors (except 429 rate limits)
+- See @docs/error-handling.mdc for complete rules
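The decision rules above can be sketched as follows (illustrative stand-in classes; the real helpers live in @packages/mcp-server/src/telem/logging.ts):

```typescript
// Stand-in error classes for illustration only.
class UserInputError extends Error {}
class ApiServerError extends Error {}

type LogHelper = "skip" | "logError" | "logIssue";

function chooseLogHelper(error: unknown): LogHelper {
  if (error instanceof UserInputError) return "skip"; // expected validation failure
  if (error instanceof ApiServerError) return "logIssue"; // system error: create a Sentry Issue
  return "logError"; // handled operational error: Sentry Logs only, no Issue
}
```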
+
+## Log Options
+
+All helpers accept:
+
+```typescript
+interface LogOptions {
+ loggerScope?: string | readonly string[]; // e.g., ["cloudflare", "oauth"]
+  extra?: Record<string, unknown>; // Additional context
+  contexts?: Record<string, Record<string, unknown>>; // Sentry contexts
+}
+```
+
+`logIssue()` also accepts `attachments` for files to attach to the Sentry event.
+
+## Configuration
+
+**Environment Variables:**
+- `LOG_LEVEL` - Override log level (`"debug"`, `"info"`, `"warning"`, `"error"`)
+- `NODE_ENV` - Determines default level (`"development"` = debug, else info)
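The resolution order can be sketched as (assumed helper name, for illustration):

```typescript
type LogLevel = "debug" | "info" | "warning" | "error";

function resolveLogLevel(env: { LOG_LEVEL?: LogLevel; NODE_ENV?: string }): LogLevel {
  if (env.LOG_LEVEL) return env.LOG_LEVEL; // 1. explicit override wins
  return env.NODE_ENV === "development" ? "debug" : "info"; // 2. NODE_ENV fallback
}
```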
+
+**Sinks:**
+- Console: JSON Lines format for log aggregation
+- Sentry: Sends logs to Sentry for monitoring (uses `Sentry.init` config)
+
+## HTTP Request Logging
+
+Cloudflare middleware logs all HTTP requests automatically:
+
+```typescript
+// Middleware in packages/mcp-cloudflare/src/server/logging.ts
+app.use(createRequestLogger(["cloudflare", "http"]));
+
+// Logs: {"level":"INFO","message":"GET /api/search","properties":{"status":200,"duration_ms":42}}
+```
+
+## References
+
+- Implementation: @packages/mcp-server/src/telem/logging.ts
+- Error handling patterns: @docs/error-handling.mdc
+- Monitoring and tracing: @docs/monitoring.mdc
+- LogTape docs: https://logtape.org/
diff --git a/docs/monitoring.mdc b/docs/monitoring.mdc
new file mode 100644
index 000000000..4b1434c6b
--- /dev/null
+++ b/docs/monitoring.mdc
@@ -0,0 +1,278 @@
+---
+description:
+globs:
+alwaysApply: false
+---
+# Monitoring
+
+Observability patterns using Sentry across the MCP server.
+
+## Architecture
+
+Different Sentry SDKs for different environments:
+- **Core server**: `@sentry/core` (platform-agnostic)
+- **Cloudflare Workers**: `@sentry/cloudflare`
+- **Node.js stdio**: `@sentry/node`
+- **React client**: `@sentry/react`
+
+## Core Server Instrumentation
+
+### Error Logging
+
+See `logIssue` in @packages/mcp-server/src/telem/logging.ts (documented in @docs/logging.mdc) for the canonical way to create an Issue and structured log entry.
+
+### Tracing Pattern
+
+```typescript
+export async function createTracedToolHandler<T extends string>(
+ name: T,
+ handler: ToolHandlerFunction
+): Promise<[T, ToolHandlerFunction]> {
+ return [
+ name,
+ async (context: ServerContext, params: ToolParams) => {
+ const attributes = {
+ "mcp.tool.name": name,
+ ...extractMcpParameters(params),
+ };
+
+ return await withActiveSpan(
+ `tools/call ${name}`,
+ attributes,
+ async () => handler(context, params)
+ );
+ },
+ ];
+}
+```
+
+### Span Management
+
+```typescript
+async function withActiveSpan<T>(
+  name: string,
+  attributes: Record<string, unknown>,
+  fn: () => Promise<T>
+): Promise<T> {
+ const activeSpan = getActiveSpan();
+ const span = activeSpan?.startSpan(name) ?? startInactiveSpan({ name });
+
+ span.setAttributes(attributes);
+
+ try {
+ return await fn();
+  } catch (error) {
+    span.setStatus({ code: 2, message: error instanceof Error ? error.message : String(error) });
+ throw error;
+ } finally {
+ span.end();
+ }
+}
+```
+
+## Cloudflare Workers Setup
+
+### Configuration
+
+```typescript
+// sentry.config.ts
+export default function getSentryConfig(env: Env, context: ExecutionContext) {
+ return {
+ dsn: env.VITE_SENTRY_DSN,
+ environment: env.VITE_SENTRY_ENVIRONMENT || "development",
+ context,
+ integrations: [
+ Sentry.rewriteFramesIntegration({ root: "/" }),
+ ],
+ beforeSend(event) {
+ // Redact sensitive data
+ if (event.request?.headers?.authorization) {
+ event.request.headers.authorization = "[REDACTED]";
+ }
+ return event;
+ },
+ };
+}
+```
+
+### Worker Instrumentation
+
+```typescript
+export default Sentry.withSentry(
+ (env) => getSentryConfig(env),
+ {
+    async fetch(request, env, ctx): Promise<Response> {
+ // Attach OAuth provider to request
+ request.OAUTH_PROVIDER = oAuthProvider.configure(/* ... */);
+ return app.fetch(request, env, ctx);
+ },
+ }
+);
+```
+
+## Node.js Stdio Setup
+
+```typescript
+// Init at startup
+import * as Sentry from "@sentry/node";
+
+Sentry.init({
+ dsn: process.env.SENTRY_DSN,
+ environment: process.env.NODE_ENV || "production",
+ integrations: [
+ Sentry.nodeProfilingIntegration(),
+ ],
+ tracesSampleRate: 0.1,
+ profilesSampleRate: 0.1,
+});
+
+// Handle uncaught errors
+process.on("uncaughtException", (error) => {
+ Sentry.captureException(error);
+ process.exit(1);
+});
+```
+
+## OpenTelemetry Semantic Conventions
+
+Sentry follows OpenTelemetry semantic conventions for consistent observability.
+
+### MCP Attributes (Model Context Protocol)
+Based on [OpenTelemetry MCP conventions](https://github.com/open-telemetry/semantic-conventions/blob/3097fb0af5b9492b0e3f55dc5f6c21a3dc2be8df/docs/registry/attributes/mcp.md) (currently in **draft form**):
+
+- `mcp.method.name` - The name of the request or notification method (e.g., "notifications/cancelled", "initialize", "notifications/initialized")
+- `mcp.prompt.name` - The name of the prompt or prompt template provided in the request or response (e.g., "analyze-code")
+- `mcp.request.argument.<key>` - Additional arguments passed to the request within the `params` object (e.g., "mcp.request.argument.location='Seattle, WA'", "mcp.request.argument.a='42'")
+- `mcp.request.id` - A unique identifier for the request (e.g., "42")
+- `mcp.resource.uri` - The value of the resource uri (e.g., "postgres://database/customers/schema", "file:///home/user/documents/report.pdf")
+- `mcp.session.id` - Identifies MCP session (e.g., "191c4850af6c49e08843a3f6c80e5046")
+- `mcp.tool.name` - The name of the tool provided in the request (e.g., "get-weather", "execute_command")
+
+### Custom MCP Attributes
+These are custom attributes - *not in draft spec* - we've added to enhance MCP observability:
+
+- `mcp.resource.name` - The name of the resource (e.g., "sentry-docs-platform", "sentry-query-syntax")
+- `mcp.transport` - The transport method used for MCP communication (values: "http", "sse", "stdio")
+
+### User Agent Tracking
+
+Following [OpenTelemetry semantic conventions for user agent](https://opentelemetry.io/docs/specs/semconv/attributes-registry/user-agent/):
+
+- `user_agent.original` - The original User-Agent header value from the client
+
+**Cloudflare Transport**: Captured from the initial SSE/WebSocket connection request headers and cached for the session
+
+### Network Attributes
+Based on [OpenTelemetry network conventions](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/registry/attributes/network.md):
+
+- `network.transport` - Transport protocol used ("pipe" for stdio, "tcp" for SSE)
+
+### GenAI Attributes (Generative AI)
+Based on [OpenTelemetry GenAI conventions](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/registry/attributes/gen-ai.md):
+
+- `gen_ai.system` - The AI system being used (e.g., "anthropic")
+- `gen_ai.request.model` - Name of the GenAI model
+- `gen_ai.request.max_tokens` - Maximum tokens to generate
+- `gen_ai.operation.name` - Type of operation (e.g., "chat")
+- `gen_ai.usage.input_tokens` - Number of tokens in input
+- `gen_ai.usage.output_tokens` - Number of tokens in response
+
+### Span Naming Pattern
+Follows the format: `{mcp.method.name} {target}` per OpenTelemetry MCP semantic conventions
+
+- Tools: `tools/call {tool_name}` (e.g., `tools/call find_issues`)
+- Prompts: `prompts/get {prompt_name}` (e.g., `prompts/get analyze-code`)
+- Resources: `resources/read {resource_uri}` (e.g., `resources/read https://github.com/...`)
+- Client: `mcp.client/{target}` (e.g., `mcp.client/agent`)
+- Connect: `mcp.connect/{transport}` (e.g., `mcp.connect/stdio`)
+- Auth: `mcp.auth/{method}` (e.g., `mcp.auth/oauth`)
+
+**Note**: Span names are flexible and can be adjusted based on your needs. However, attribute names MUST follow the OpenTelemetry semantic conventions exactly as specified above.
+
+### Example Attributes
+```typescript
+// MCP Tool Execution
+{
+ "mcp.tool.name": "find_issues",
+ "mcp.session.id": "191c4850af6c49e08843a3f6c80e5046"
+}
+
+// GenAI Agent
+{
+ "gen_ai.system": "anthropic",
+ "gen_ai.request.model": "claude-3-5-sonnet-20241022",
+ "gen_ai.operation.name": "chat",
+ "gen_ai.usage.input_tokens": 150,
+ "gen_ai.usage.output_tokens": 2048
+}
+
+// Connection
+{
+ "network.transport": "pipe", // "pipe" for stdio, "tcp" for SSE
+ "mcp.session.id": "191c4850af6c49e08843a3f6c80e5046",
+ "mcp.transport": "stdio", // Custom attribute: "stdio" or "sse"
+ "service.version": "1.2.3" // Version of the MCP server/client
+}
+
+// MCP Client
+{
+ "mcp.session.id": "191c4850af6c49e08843a3f6c80e5046",
+ "network.transport": "pipe", // "pipe" for stdio, "tcp" for SSE
+ "mcp.transport": "stdio", // Custom attribute: "stdio" or "sse"
+ "gen_ai.system": "anthropic",
+ "gen_ai.request.model": "claude-3-5-sonnet-20241022",
+ "gen_ai.operation.name": "chat",
+ "service.version": "1.2.3" // Version of the MCP client
+}
+```
+
+## Error Classification
+
+### Skip Logging For:
+- `UserInputError` - Expected user errors
+- 4xx API responses (except 429)
+- Validation errors
+
+### Always Log:
+- 5xx errors
+- Network failures
+- Unexpected exceptions
+- Rate limit errors (429)
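A sketch of this classification (the error shape here is an assumption for illustration):

```typescript
// Decide whether an error should be captured to Sentry, per the rules above.
function shouldCaptureToSentry(error: { name?: string; status?: number }): boolean {
  if (error.name === "UserInputError") return false; // expected user error
  if (error.status === 429) return true; // rate limits are always logged
  if (error.status !== undefined && error.status >= 400 && error.status < 500) {
    return false; // other 4xx client errors are skipped
  }
  return true; // 5xx, network failures, unexpected exceptions
}
```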
+
+## Performance Monitoring
+
+### Traces Configuration
+```typescript
+{
+ tracesSampleRate: 0.1, // 10% in production
+ profilesSampleRate: 0.1, // 10% of traces
+}
+```
+
+## Environment Variables
+
+### Required for Monitoring
+```bash
+# Cloudflare (build-time)
+VITE_SENTRY_DSN=https://...@sentry.io/...
+VITE_SENTRY_ENVIRONMENT=production
+
+# Node.js (runtime)
+SENTRY_DSN=https://...@sentry.io/...
+NODE_ENV=production
+```
+
+## Best Practices
+
+1. **Context is King**: Always include relevant context
+2. **Redact Secrets**: Never log tokens or sensitive data
+3. **Sample Wisely**: Use appropriate sampling rates
+4. **Tag Everything**: Use consistent tags for filtering
+5. **Skip Expected Errors**: Don't pollute with user errors
+
+## References
+
+- Core logging: `packages/mcp-server/src/logging.ts`
+- Worker config: `packages/mcp-cloudflare/src/server/sentry.config.ts`
+- Tracing helpers: `packages/mcp-server/src/tracing.ts`
+- Sentry docs: https://docs.sentry.io/
diff --git a/docs/pr-management.mdc b/docs/pr-management.mdc
new file mode 100644
index 000000000..3121f9fe5
--- /dev/null
+++ b/docs/pr-management.mdc
@@ -0,0 +1,434 @@
+# Pull Request Management
+
+Comprehensive guide for managing pull requests in the Sentry MCP project, including GitHub CLI usage, review feedback handling, and structured commit practices.
+
+## GitHub CLI Usage
+
+### Installation and Setup
+
+```bash
+# Install GitHub CLI
+brew install gh # macOS
+# or follow instructions at https://cli.github.com/
+
+# Authenticate
+gh auth login
+
+# Verify installation
+gh auth status
+```
+
+### Creating Pull Requests
+
+```bash
+# Create PR with title and description
+gh pr create --title "feat: add new search functionality" --body "$(cat <<'EOF'
+## Summary
+- Add natural language search for events
+- Implement AI-powered query translation
+
+## Changes
+- Added search_events tool with AI integration
+- Implemented dataset-specific formatting
+- Added comprehensive test coverage
+EOF
+)"
+
+# Create draft PR
+gh pr create --draft --title "WIP: experimental feature"
+
+# Create PR with specific base branch
+gh pr create --base main --head feat-better-search
+```
+
+### Managing Pull Requests
+
+```bash
+# View PR status
+gh pr status
+
+# List PRs
+gh pr list
+gh pr list --state open --author @me
+
+# View specific PR
+gh pr view 123
+gh pr view https://github.com/getsentry/sentry-mcp/pull/123
+
+# Check CI status
+gh pr checks 123
+
+# Merge PR (when ready)
+gh pr merge 123 --squash --delete-branch
+```
+
+### Reviewing Feedback
+
+```bash
+# View PR comments and reviews
+gh pr view 123 --comments
+
+# View specific review
+gh api repos/getsentry/sentry-mcp/pulls/123/reviews
+
+# List review comments on specific files
+gh api repos/getsentry/sentry-mcp/pulls/123/comments
+```
+
+## Handling Review Feedback
+
+### Sources of Feedback
+
+1. **Human reviewers** - Team members providing code review
+2. **AI agents** (e.g., Cursor, other Claude instances) - Automated suggestions
+3. **CI/CD systems** - Build failures, test failures, linting issues
+4. **GitHub bots** - Automated security, dependency, or quality checks
+
+### Validation Process
+
+**CRITICAL**: Always verify feedback validity before implementing changes.
+
+#### For Human Review Feedback
+- ✅ **Always implement** - Human reviewers understand context and requirements
+- ✅ **Ask for clarification** if feedback is unclear
+- ✅ **Discuss trade-offs** if you disagree with approach
+
+#### For AI Agent Feedback
+- ⚠️ **Verify accuracy** - AI suggestions may be outdated or context-unaware
+- ⚠️ **Check compatibility** - Ensure suggestions align with project patterns
+- ⚠️ **Test thoroughly** - AI changes can introduce subtle bugs
+
+**Common AI feedback to validate carefully:**
+```bash
+# AI suggests error handling - verify it's needed
+- "Add error handling for JSON.parse"
+  → Check if error handling exists elsewhere in call chain
+
+# AI suggests performance optimizations - verify impact
+- "Use useMemo for expensive calculations"
+  → Measure if optimization is actually needed
+
+# AI suggests refactoring - verify consistency
+- "Extract this into a separate function"
+  → Check if it follows existing code patterns
+```
+
+#### For CI/CD Feedback
+- ✅ **Fix immediately** - Build/test failures block progress
+- ✅ **Address linting** - Maintains code quality standards
+- ✅ **Resolve conflicts** - Required for merge
+
+### Response Workflow
+
+```bash
+# 1. Fetch latest comments
+gh pr view --comments
+
+# 2. Address each piece of feedback
+git checkout feat-better-search
+# Make changes...
+
+# 3. Commit with reference to feedback
+git commit -m "fix: address review feedback about error handling
+
+Per @reviewer suggestion, add proper error boundaries around
+JSON.parse operations in search-events.ts.
+
+Co-Authored-By: Codex CLI Agent "
+
+# 4. Push and notify
+git push
+gh pr comment --body "✅ Addressed all review feedback"
+```
+
+## Commit Message Structure
+
+### Format Standard
+
+```
+<type>(<scope>): <description>
+
+[optional body]
+
+[optional footer]
+```
+
+### Types
+- `feat`: New feature
+- `fix`: Bug fix
+- `refactor`: Code change that neither fixes a bug nor adds a feature
+- `perf`: Performance improvement
+- `test`: Adding or modifying tests
+- `docs`: Documentation changes
+- `style`: Code style changes (formatting, missing semicolons, etc.)
+- `chore`: Changes to build process or auxiliary tools
+
+### Scope (optional)
+- `server`: MCP server changes
+- `client`: Test client changes
+- `cloudflare`: Cloudflare Worker changes
+- `evals`: Evaluation test changes
+- `tools`: Tool-specific changes
+- `api`: API client changes
+
+### Examples
+
+```bash
+# Feature addition
+git commit -m "feat(tools): add natural language search for events
+
+Implement AI-powered query translation using OpenAI GPT-4 to convert
+natural language queries into Sentry search syntax. Supports multiple
+datasets: errors, logs, and spans.
+
+Co-Authored-By: Codex CLI Agent "
+
+# Bug fix
+git commit -m "fix(evals): update search-events eval to use available exports
+
+Replace missing TaskRunner and Factuality imports with NoOpTaskRunner
+and ToolPredictionScorer to resolve CI build failures after factuality
+checker removal.
+
+Co-Authored-By: Codex CLI Agent "
+
+# Refactoring
+git commit -m "refactor: move tool test to appropriate directory
+
+Move toolDefinitions.test.ts to tools/ directory and rename to tools.test.ts
+to fix circular dependency and improve organization.
+
+Co-Authored-By: Codex CLI Agent "
+```
+
+### AI Attribution
+
+- When assisted by AI, include only a Co-Authored-By footer naming the agent, for example:
+ - `Co-Authored-By: Codex CLI Agent `
+ - `Co-Authored-By: Claude Code `
+
+Do not include generator or "Generated with" banners in PR descriptions or commits.
+
+## PR Description Structure
+
+### Focus on Reviewer Needs
+
+**Good PR descriptions help reviewers understand:**
+- What problem was solved
+- What changes were made
+- Why these changes were necessary
+- Any potential impact or risks
+
+**Avoid including:**
+- Test plans or instructions (CI handles testing)
+- Implementation details that are clear from the code
+- Line-by-line walkthroughs
+- Verbose explanations better suited for documentation
+
+### Template
+
+```markdown
+## Summary
+
+Fixes [issue] by [solution approach]. This addresses [problem] and enables [benefit].
+
+### Key Changes
+
+- Fixed [specific issue]: [brief explanation]
+- Added [new feature]: [brief explanation]
+- Refactored [component]: [brief explanation]
+
+### Breaking Changes
+
+- None (or list each change, e.g. "Updated tool interface - see migration guide")
+
+### Dependencies
+
+- Depends on #123
+- Requires Sentry API version X.Y
+
+
+```
+
+### Real Examples
+
+**Good - Concise and reviewer-focused:**
+```markdown
+## Summary
+Fixes search_events tool to properly understand OpenTelemetry semantic conventions and refactors the code into a clean module structure.
+
+### Key Changes
+- Fixed semantic understanding: "agent calls" now correctly maps to GenAI conventions (`gen_ai.*`) instead of MCP tool calls
+- Refactored 1385-line monolithic file into 8 focused modules with clear responsibilities
+- Added dynamic semantic lookup for better attribute disambiguation
+
+
+```
+
+**Bad - Too verbose with unnecessary details:**
+```markdown
+## Summary
+This PR implements comprehensive improvements to the search_events tool...
+
+### Technical Implementation Details
+- Uses OpenAI GPT-4 for natural language processing
+- Implements sophisticated caching mechanisms
+- Creates extensive test coverage with MSW mocks
+- Follows advanced TypeScript patterns
+
+### Test Plan
+1. Run `pnpm test` to verify all tests pass
+2. Test with various query types:
+ - "agent calls" should return GenAI spans
+ - "database errors" should return DB-related errors
+3. Verify the UI displays results correctly
+4. Check that performance remains optimal
+
+### File Structure Changes
+- src/tools/search-events.ts → src/tools/search-events/handler.ts
+- Added src/tools/search-events/agent.ts for AI logic
+- [... detailed file-by-file breakdown ...]
+```
+
+## Review Process Workflow
+
+### 1. Pre-Review Checklist
+
+```bash
+# Run all quality checks before requesting review
+pnpm run tsc # TypeScript compilation
+pnpm run lint # Code linting
+pnpm run test # Unit tests
+
+# Check git status is clean
+git status
+
+# Verify CI passes
+gh pr checks
+```
+
+### 2. Requesting Review
+
+```bash
+# Add specific reviewers (gh expects bare usernames, without a leading @)
+gh pr edit --add-reviewer username1,username2
+
+# Request review from a team (org/team-name)
+gh pr edit --add-reviewer getsentry/mcp-team
+
+# Mark as ready for review (if draft)
+gh pr ready
+```
+
+### 3. Responding to Reviews
+
+```bash
+# View all feedback
+gh pr view --comments
+
+# Address feedback in commits
+git add .
+git commit -m "fix: address review feedback about validation
+
+Add input validation for natural language queries per @reviewer
+suggestion. Ensures minimum length and sanitizes special characters.
+
+Co-Authored-By: Claude "
+
+# Respond to comments
+gh pr comment --body "Thanks for the feedback! Fixed the validation issue in latest commit."
+
+# Re-request review after changes
+gh pr edit --add-reviewer username
+```
+
+### 4. Final Steps
+
+```bash
+# Ensure all checks pass
+gh pr checks
+
+# Merge when approved
+gh pr merge --squash --delete-branch
+
+# Or merge with detailed commit message
+gh pr merge --squash --delete-branch --body "$(cat <<'EOF'
+feat: add natural language search for Sentry events
+
+Complete implementation of AI-powered search tool that translates
+natural language queries into Sentry search syntax. Supports errors,
+logs, and spans datasets with comprehensive field selection.
+
+Co-Authored-By: Claude
+EOF
+)"
+```
+
+## Troubleshooting Common Issues
+
+### CI Failures
+
+```bash
+# Check specific failure
+gh pr checks --watch
+
+# View detailed logs
+gh run view --log
+
+# Common fixes:
+# - TypeScript errors: Fix type issues locally
+# - Test failures: Debug with `pnpm test -- --reporter=verbose`
+# - Lint errors: Run `pnpm lint --fix`
+```
+
+### Merge Conflicts
+
+```bash
+# Update branch with latest main
+git checkout feat-better-search
+git fetch origin
+git rebase origin/main
+
+# Resolve conflicts manually, then:
+git add .
+git rebase --continue
+git push --force-with-lease
+```
+
+### Review Comments Not Resolving
+
+```bash
+# View unresolved comments
+gh pr view --comments | grep -A5 -B5 "REQUESTED_CHANGES"
+
+# Ensure you've addressed all feedback:
+# 1. Made code changes
+# 2. Committed changes
+# 3. Pushed to branch
+# 4. Responded to comments
+```
+
+## Best Practices
+
+### Do's
+- ✅ Always verify AI feedback before implementing
+- ✅ Write clear, descriptive commit messages
+- ✅ Focus PR descriptions on reviewer needs, not test plans
+- ✅ Respond promptly to review feedback
+- ✅ Run quality checks before pushing
+- ✅ Use GitHub CLI for efficient workflow
+
+### Don'ts
+- ❌ Never auto-apply AI suggestions without validation
+- ❌ Don't commit without running tests locally
+- ❌ Don't merge without reviewer approval
+- ❌ Don't ignore CI failures
+- ❌ Don't force push without `--force-with-lease`
+
+### AI Agent Collaboration
+- 🤖 AI agents may suggest improvements via comments
+- 🤖 Always evaluate AI suggestions for correctness and fit
+- 🤖 Test AI-suggested changes thoroughly
+- 🤖 Include AI feedback validation in your review process
diff --git a/docs/quality-checks.mdc b/docs/quality-checks.mdc
new file mode 100644
index 000000000..b7ce0943b
--- /dev/null
+++ b/docs/quality-checks.mdc
@@ -0,0 +1,78 @@
+# Quality Checks
+
+Required quality checks that MUST pass before completing any code changes.
+
+## Critical Quality Checks
+
+**After ANY code changes, you MUST run:**
+
+```bash
+pnpm -w run lint:fix # Fix linting issues
+pnpm tsc --noEmit # Check TypeScript types
+pnpm test # Run all tests
+```
+
+**DO NOT proceed if any check fails.**
+
+## Tool Testing Requirements
+
+**ALL tools MUST have comprehensive tests that verify:**
+
+- **Input validation** - Required/optional parameters, type checking, edge cases
+- **Output formatting** - Markdown structure, content accuracy, error messages
+- **API integration** - Mock server responses, error handling, parameter passing
+- **Snapshot testing** - Use inline snapshots to verify formatted output
+
+**Required test patterns:**
+- Unit tests in individual `{tool-name}.test.ts` files using Vitest and MSW mocks
+- Input/output validation with inline snapshots
+- Error case testing (API failures, invalid params)
+- Mock server setup in `packages/mcp-server-mocks`
+
+See `docs/testing.mdc` for detailed testing patterns and `docs/adding-tools.mdc` for the testing workflow.
+
+## Tool Count Limits
+
+**IMPORTANT**: AI agents have a hard cap of 45 total tools. Sentry MCP must:
+- Target ~20 tools (current best practice)
+- Never exceed 25 tools (absolute maximum)
+- This limit exists in Cursor and possibly other tools
+
+**Current status**: 19 tools (within target range)
+
+## Build Verification
+
+Ensure the build process works correctly:
+
+```bash
+npm run build # Build all packages
+npm run generate-tool-definitions # Generate tool definitions
+```
+
+Tool definitions must generate without errors for client consumption.
+
+## Code Quality Standards
+
+- **TypeScript strict mode** - All code must compile without errors
+- **Linting compliance** - Follow established code style patterns
+- **Test coverage** - All new tools must have comprehensive tests
+- **Error handling** - Use patterns from `common-patterns.mdc#error-handling`
+- **API patterns** - Follow patterns from `api-patterns.mdc`
+
+## Pre-Commit Checklist
+
+Before completing any task:
+
+- [ ] All quality checks pass (`pnpm -w run lint:fix`, `pnpm tsc --noEmit`, `pnpm test`)
+- [ ] Tool count within limits (≤20 target, ≤25 absolute max)
+- [ ] New tools have comprehensive tests
+- [ ] Build process generates tool definitions successfully
+- [ ] Documentation updated if patterns changed
+- [ ] AGENTS.md ↔ cursor.mdc sync maintained (if applicable)
+
+## References
+
+- Testing patterns: `testing.mdc`
+- Tool development: `adding-tools.mdc`
+- Code patterns: `common-patterns.mdc`
+- API usage: `api-patterns.mdc`
diff --git a/docs/releases/cloudflare.mdc b/docs/releases/cloudflare.mdc
new file mode 100644
index 000000000..43d4a19d7
--- /dev/null
+++ b/docs/releases/cloudflare.mdc
@@ -0,0 +1,260 @@
+# Cloudflare Release
+
+Cloudflare Workers deployment configuration and release process.
+
+## Architecture Overview
+
+The deployment consists of:
+- **Worker**: Stateless HTTP server with OAuth flow and MCP handler
+- **KV Storage**: OAuth token storage
+- **Static Assets**: React UI for setup instructions
+
+## Wrangler Configuration
+
+### wrangler.jsonc
+
+```jsonc
+{
+ "name": "sentry-mcp-oauth",
+ "main": "./src/server/index.ts",
+ "compatibility_date": "2025-03-21",
+ "compatibility_flags": [
+ "nodejs_compat",
+ "nodejs_compat_populate_process_env"
+ ],
+ "keep_vars": true,
+
+ // Bindings
+ "kv_namespaces": [{
+ "binding": "OAUTH_KV",
+ "id": "your-kv-namespace-id"
+ }],
+
+ // SPA configuration
+ "site": {
+ "bucket": "./dist/client"
+ }
+}
+```
+
+### Environment Variables
+
+Required in production:
+```bash
+SENTRY_CLIENT_ID=your_oauth_app_id
+SENTRY_CLIENT_SECRET=your_oauth_app_secret
+COOKIE_SECRET=32_char_random_string
+```
+
+Optional overrides for self-hosted deployments:
+```bash
+# Leave unset to target the SaaS host
+SENTRY_HOST=sentry.example.com # Hostname only (self-hosted only)
+```
+
+Configure these overrides only when your Cloudflare deployment connects to a
+self-hosted Sentry instance; no additional host variables are required for the
+SaaS service.
+
+Development (.dev.vars):
+```bash
+SENTRY_CLIENT_ID=dev_client_id
+SENTRY_CLIENT_SECRET=dev_secret
+COOKIE_SECRET=dev-cookie-secret
+```
+
+## MCP Handler Setup
+
+The MCP handler uses a stateless architecture with closure-captured context:
+
+```typescript
+import { experimental_createMcpHandler as createMcpHandler } from "agents/mcp";
+import { buildServer } from "@sentry/mcp-server/server";
+
+const mcpHandler: ExportedHandler = {
+ async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
+ // Extract auth props from ExecutionContext (set by OAuth provider)
+ const oauthCtx = ctx as OAuthExecutionContext;
+
+ // Build complete ServerContext from OAuth props + constraints
+ const serverContext: ServerContext = {
+ userId: oauthCtx.props.userId,
+ clientId: oauthCtx.props.clientId,
+ accessToken: oauthCtx.props.accessToken,
+ grantedScopes: expandedScopes,
+ constraints: verification.constraints,
+ sentryHost,
+ mcpUrl: oauthCtx.props.mcpUrl,
+ };
+
+ // Build server with context - context is captured in tool handler closures
+ const server = buildServer({ context: serverContext });
+
+ // Run MCP handler - context already available via closures
+ return createMcpHandler(server, { route: "/mcp" })(request, env, ctx);
+ },
+};
+```
+
+## OAuth Provider Setup
+
+Configure the OAuth provider with required scopes:
+
+```typescript
+const oAuthProvider = new OAuthProvider({
+ clientId: env.SENTRY_CLIENT_ID,
+ clientSecret: env.SENTRY_CLIENT_SECRET,
+ oauthUrl: `https://${env.SENTRY_HOST}/api/0/authorize/`,
+ tokenUrl: `https://${env.SENTRY_HOST}/api/0/token/`,
+ redirectUrl: `${new URL(request.url).origin}/auth/sentry/callback`,
+ scope: ["org:read", "project:read", "issue:read", "issue:write"]
+});
+```
+
+## Deployment Commands
+
+### Local Development
+
+```bash
+# Install dependencies
+pnpm install
+
+# Start dev server
+pnpm dev
+
+# Access at http://localhost:8787
+```
+
+### Production Deployment
+
+#### Automated via GitHub Actions (Recommended)
+
+Production deployments happen automatically when changes are pushed to the main branch:
+
+1. Push to main or merge a PR
+2. GitHub Actions runs tests
+3. If tests pass, deploys to Cloudflare
+
+Required secrets in GitHub repository settings:
+- `CLOUDFLARE_API_TOKEN` - API token with Workers deployment permissions
+- `CLOUDFLARE_ACCOUNT_ID` - Your Cloudflare account ID
+
+See `github-actions.mdc` for detailed setup instructions.
+
+#### Manual Deployment
+
+```bash
+# Build client assets
+pnpm build
+
+# Deploy to Cloudflare
+pnpm deploy
+
+# Or deploy specific environment
+pnpm deploy --env production
+```
+
+#### Version Uploads (Gradual Rollouts)
+
+For feature branches, GitHub Actions automatically uploads new versions without deploying:
+
+1. Push to any branch (except main)
+2. Tests run automatically
+3. If tests pass, version is uploaded to Cloudflare
+4. Use Cloudflare dashboard to gradually roll out the version
+
+Manual version upload:
+```bash
+pnpm cf:versions:upload
+```
+
+### Creating Resources
+
+First-time setup:
+```bash
+# Create KV namespace for OAuth token storage
+npx wrangler kv:namespace create OAUTH_KV
+
+# Update wrangler.jsonc with the namespace ID
+```
+
+## Multi-Region Considerations
+
+Cloudflare Workers run globally, but consider:
+- KV is eventually consistent globally
+- Workers are stateless and edge-deployed
+- Use regional hints for performance
+
+## Security Configuration
+
+### CORS Settings
+
+```typescript
+const ALLOWED_ORIGINS = [
+ "https://sentry.io",
+ "https://*.sentry.io"
+];
+
+// Apply to responses
+response.headers.set("Access-Control-Allow-Origin", origin);
+response.headers.set("Access-Control-Allow-Credentials", "true");
+```
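+
+One way to honor the wildcard entry when validating the `Origin` header (a sketch; the actual middleware may differ):
+
+```typescript
+// Sketch: match an Origin header against the allow-list above,
+// including the `https://*.sentry.io` wildcard entry.
+function isAllowedOrigin(origin: string): boolean {
+  if (origin === "https://sentry.io") return true;
+  return origin.startsWith("https://") && origin.endsWith(".sentry.io");
+}
+```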
+
+### Cookie Configuration
+
+```typescript
+// Secure cookie settings
+"HttpOnly; Secure; SameSite=Lax; Max-Age=2592000"
+```
+
+## Monitoring
+
+### Sentry Integration
+
+```typescript
+// sentry.config.ts
+export default {
+ dsn: env.VITE_SENTRY_DSN,
+ environment: env.VITE_SENTRY_ENVIRONMENT || "development",
+ integrations: [
+ Sentry.rewriteFramesIntegration({
+ root: "/",
+ }),
+ ],
+ transportOptions: {
+ sendClientReports: false,
+ },
+};
+```
+
+### Worker Analytics
+
+Monitor via Cloudflare dashboard:
+- Request rates
+- Error rates
+- CPU time and memory usage
+- KV operations
+
+## Troubleshooting
+
+### Common Issues
+
+1. **OAuth redirect mismatch**
+ - Ensure callback URL matches Sentry app config
+ - Check protocol (http vs https)
+
+2. **Context not available in tool handlers**
+ - Verify buildServer() is called with context before createMcpHandler()
+ - Check ExecutionContext.props contains OAuth data
+ - Ensure context is passed correctly during server build
+
+3. **Environment variables missing**
+ - Use `wrangler secret put` for production
+ - Check `.dev.vars` for local development
+
+## References
+
+- Worker code: `packages/mcp-cloudflare/src/server/`
+- Client UI: `packages/mcp-cloudflare/src/client/`
+- Wrangler config: `packages/mcp-cloudflare/wrangler.jsonc`
+- Cloudflare docs: https://developers.cloudflare.com/workers/
diff --git a/docs/releases/stdio.mdc b/docs/releases/stdio.mdc
new file mode 100644
index 000000000..21184082d
--- /dev/null
+++ b/docs/releases/stdio.mdc
@@ -0,0 +1,202 @@
+# stdio Release
+
+npm package release process for the MCP server stdio transport.
+
+## Overview
+
+The MCP server is published to npm as `@sentry/mcp-server` for use with:
+- Claude Desktop
+- Cursor IDE
+- VS Code with MCP extension
+- Other MCP clients supporting stdio transport
+
+## Package Structure
+
+Published package includes:
+- Compiled TypeScript (`dist/`)
+- stdio transport implementation
+- Type definitions
+- Tool definitions
+
+## Release Process
+
+### 1. Version Bump
+
+Update version in `packages/mcp-server/package.json`:
+
+```json
+{
+ "name": "@sentry/mcp-server",
+ "version": "1.2.3"
+}
+```
+
+Follow semantic versioning:
+- **Major**: Breaking changes to tool interfaces
+- **Minor**: New tools or non-breaking features
+- **Patch**: Bug fixes
+
+### 2. Update Changelog
+
+Document changes in `CHANGELOG.md`:
+
+```markdown
+## [1.2.3] - 2025-01-16
+
+### Added
+- New `search_docs` tool for documentation search
+
+### Fixed
+- Fix context propagation in tool handlers
+```
+
+### 3. Quality Checks
+
+**MANDATORY before publishing:**
+
+```bash
+pnpm -w run lint:fix # Fix linting issues
+pnpm tsc --noEmit # TypeScript type checking
+pnpm test # Run all tests
+pnpm run build # Ensure clean build
+```
+
+All checks must pass.
+
+### 4. Publish to npm
+
+```bash
+cd packages/mcp-server
+
+# Dry run to verify package contents
+npm publish --dry-run
+
+# Publish to npm
+npm publish
+```
+
+### 5. Tag Release
+
+```bash
+git tag v1.2.3
+git push origin v1.2.3
+```
+
+## User Installation
+
+Users install via npx in their MCP client configuration:
+
+### Claude Desktop
+
+```json
+{
+ "mcpServers": {
+ "sentry": {
+ "command": "npx",
+ "args": ["-y", "@sentry/mcp-server"],
+ "env": {
+ "SENTRY_ACCESS_TOKEN": "sntrys_...",
+ "SENTRY_HOST": "sentry.io"
+ }
+ }
+ }
+}
+```
+
+Config location:
+- **macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
+- **Windows**: `%APPDATA%\Claude\claude_desktop_config.json`
+
+### Cursor IDE
+
+Add to `.cursor/mcp.json`:
+
+```json
+{
+ "mcpServers": {
+ "sentry": {
+ "command": "npx",
+ "args": ["-y", "@sentry/mcp-server"],
+ "env": {
+ "SENTRY_ACCESS_TOKEN": "sntrys_...",
+ "SENTRY_HOST": "sentry.io"
+ }
+ }
+ }
+}
+```
+
+## Environment Variables
+
+Required:
+- `SENTRY_ACCESS_TOKEN` - Sentry API access token
+
+Optional:
+- `SENTRY_HOST` - Sentry instance hostname (default: `sentry.io`)
+- `SENTRY_ORG` - Default organization slug
+- `SENTRY_PROJECT` - Default project slug
+
+## Version Pinning
+
+Users can pin to specific versions:
+
+```json
+{
+ "args": ["-y", "@sentry/mcp-server@1.2.3"]
+}
+```
+
+## Testing Releases
+
+### Local Testing Before Publishing
+
+Test the built package locally:
+
+```bash
+cd packages/mcp-server
+npm pack
+# Creates sentry-mcp-server-1.2.3.tgz
+
+# Test installation
+npm install -g ./sentry-mcp-server-1.2.3.tgz
+
+# Run stdio server (the bare package name is not a shell command;
+# npx resolves the package's bin entry)
+SENTRY_ACCESS_TOKEN=... npx @sentry/mcp-server
+```
+
+### Beta Releases
+
+For testing with users before stable release:
+
+```bash
+npm publish --tag beta
+```
+
+Users install with:
+```json
+{
+ "args": ["-y", "@sentry/mcp-server@beta"]
+}
+```
+
+## Troubleshooting
+
+### Package Not Found
+- Verify package name: `@sentry/mcp-server` (with scope)
+- Check npm registry: `npm view @sentry/mcp-server`
+
+### Version Mismatch
+- Users may have cached version: `npx clear-npx-cache`
+- Recommend version pinning for stability
+
+### Build Failures
+- Ensure `pnpm run build` succeeds before publishing
+- Check TypeScript compilation errors
+- Verify all dependencies are listed in package.json
+
+## References
+
+- Package config: `packages/mcp-server/package.json`
+- stdio transport: `packages/mcp-server/src/transports/stdio.ts`
+- Build script: `packages/mcp-server/scripts/build.ts`
+- npm publishing docs: https://docs.npmjs.com/cli/publish
diff --git a/docs/search-events-api-patterns.md b/docs/search-events-api-patterns.md
new file mode 100644
index 000000000..70da507ab
--- /dev/null
+++ b/docs/search-events-api-patterns.md
@@ -0,0 +1,518 @@
+# Search Events API Patterns
+
+## Overview
+
+The `search_events` tool provides a unified interface for searching Sentry events across different datasets (errors, logs, spans). This document covers the API patterns, query structures, and best practices for both individual event queries and aggregate queries.
+
+## API Architecture
+
+### Legacy Discover API vs Modern EAP API
+
+Sentry uses two different API architectures depending on the dataset:
+
+1. **Legacy Discover API** (errors dataset)
+ - Uses the original Discover query format
+ - Simpler aggregate field handling
+ - Returns data in a different format
+
+2. **Modern EAP (Event Analytics Platform) API** (spans, logs datasets)
+ - Uses structured aggregate parameters
+ - More sophisticated query capabilities
+ - Different URL generation patterns
+
+### API Endpoint
+
+All queries use the same base endpoint:
+```
+/api/0/organizations/{organizationSlug}/events/
+```
+
+### Dataset Mapping
+
+The tool handles dataset name mapping internally:
+- User specifies `errors` → API uses `errors` (Legacy Discover)
+- User specifies `spans` → API uses `spans` (EAP)
+- User specifies `logs` → API uses `ourlogs` (EAP) ⚠️ Note the transformation!
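+
+The mapping can be expressed as a small lookup table (the tool's internal name for this table is an assumption):
+
+```typescript
+// User-facing dataset name -> value sent to the events API
+const API_DATASET: Record<"errors" | "spans" | "logs", string> = {
+  errors: "errors", // Legacy Discover
+  spans: "spans",   // EAP
+  logs: "ourlogs",  // EAP - note the rename
+};
+```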
+
+## Query Modes
+
+### 1. Individual Events (Samples)
+
+Returns raw event data with full details. This is the default mode when no aggregate functions are used.
+
+**Key Characteristics:**
+- Returns actual event occurrences
+- Includes default fields plus any user-requested fields
+- Sorted by timestamp (newest first) by default
+- Limited to a specific number of results (default: 10, max: 100)
+
+**Example API URL:**
+```
+https://us.sentry.io/api/0/organizations/sentry/events/?dataset=spans&field=id&field=span.op&field=span.description&field=span.duration&field=transaction&field=timestamp&field=ai.model.id&field=ai.model.provider&field=project&field=trace&per_page=50&query=&sort=-timestamp&statsPeriod=24h
+```
+
+**Default Fields by Dataset:**
+
+- **Spans**: `id`, `span.op`, `span.description`, `span.duration`, `transaction`, `timestamp`, `project`, `trace`
+- **Errors**: `issue`, `title`, `project`, `timestamp`, `level`, `message`, `error.type`, `culprit`
+- **Logs**: `timestamp`, `project`, `message`, `severity`, `trace`
+
+### 2. Aggregate Queries (Statistics)
+
+Returns grouped and aggregated data, similar to SQL GROUP BY queries.
+
+**Key Characteristics:**
+- Activated when ANY field contains a function (e.g., `count()`, `avg()`)
+- Fields should ONLY include aggregate functions and groupBy fields
+- Do NOT include default fields (id, timestamp, etc.)
+- Automatically groups by all non-function fields
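+
+A simple heuristic for the mode switch described above (the tool's actual detection logic may differ):
+
+```typescript
+// A field like "count()" or "avg(span.duration)" switches the query into
+// aggregate mode; plain fields like "span.op" do not.
+const isFunctionField = (f: string): boolean => /^[a-zA-Z_][\w.]*\(.*\)$/.test(f);
+const isAggregateQuery = (fields: string[]): boolean => fields.some(isFunctionField);
+```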
+
+**Example API URLs:**
+
+Single groupBy field:
+```
+https://us.sentry.io/api/0/organizations/sentry/events/?dataset=spans&field=ai.model.id&field=count()&per_page=50&query=&sort=-count&statsPeriod=24h
+```
+
+Multiple groupBy fields:
+```
+https://us.sentry.io/api/0/organizations/sentry/events/?dataset=spans&field=ai.model.id&field=ai.model.provider&field=sum(span.duration)&per_page=50&query=&sort=-sum_span_duration&statsPeriod=24h
+```
+
+## Query Parameters
+
+### Common Parameters
+
+| Parameter | Description | Example |
+|-----------|-------------|---------|
+| `dataset` | Which dataset to query | `spans`, `errors`, `logs` (API uses `ourlogs`) |
+| `field` | Fields to return (repeated for each field) | `field=span.op&field=count()` |
+| `query` | Sentry query syntax filter | `has:db.statement AND span.duration:>1000` |
+| `sort` | Sort order (prefix with `-` for descending) | `-timestamp`, `-count()` |
+| `per_page` | Results per page | `50` |
+| `statsPeriod` | Relative time window filter | `1h`, `24h`, `7d`, `14d`, `30d` |
+| `start` | Absolute start time (ISO 8601) | `2025-06-19T07:00:00` |
+| `end` | Absolute end time (ISO 8601) | `2025-06-20T06:59:59` |
+| `project` | Project ID (numeric, not slug) | `4509062593708032` |
+
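+Building the repeated `field` parameters can be sketched with `URLSearchParams` (illustrative; the tool's actual request builder may differ):
+
+```typescript
+// Sketch: assemble an events API query string with repeated `field` params.
+const params = new URLSearchParams({
+  dataset: "spans",
+  query: "has:db.statement",
+  sort: "-timestamp",
+  per_page: "50",
+  statsPeriod: "24h",
+});
+for (const f of ["id", "span.description", "span.duration"]) {
+  params.append("field", f);
+}
+const url = `/api/0/organizations/sentry/events/?${params.toString()}`;
+```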
+
+### Dataset-Specific Considerations
+
+#### Spans Dataset
+- Supports timestamp filters in query (e.g., `timestamp:-1h`)
+- Rich performance metrics available
+- Common aggregate functions: `count()`, `avg(span.duration)`, `p95(span.duration)`
+
+#### Errors Dataset
+- Supports timestamp filters in query
+- Issue grouping available via `issue` field
+- Common aggregate functions: `count()`, `count_unique(user.id)`, `last_seen()`
+
+#### Logs Dataset
+- Does NOT support timestamp filters in query (use `statsPeriod` instead)
+- Severity levels: fatal, error, warning, info, debug, trace
+- Common aggregate functions: `count()`, `epm()`
+- Uses `ourlogs` as the actual API dataset value (not `logs`)
+
+## Query Syntax
+
+### Basic Filters
+- Exact match: `field:value`
+- Wildcards: `field:*pattern*`
+- Comparison: `field:>100`, `field:<500`
+- Boolean: `AND`, `OR`, `NOT`
+- Phrases: `message:"database connection failed"`
+- Attribute existence: `has:field` (recommended for spans)
+
+### Attribute-Based Queries (Recommended for Spans)
+Instead of using `span.op` patterns, use `has:` queries for more flexible attribute-based filtering:
+- HTTP requests: `has:request.url` instead of `span.op:http*`
+- Database queries: `has:db.statement` or `has:db.system` instead of `span.op:db*`
+- AI/LLM calls: `has:ai.model.id` or `has:mcp.tool.name`
+
+### Aggregate Functions
+
+#### Universal Functions (all datasets)
+- `count()` - Count of events
+- `count_unique(field)` - Count of unique values
+- `epm()` - Events per minute rate
+
+#### Numeric Field Functions (spans, logs)
+- `avg(field)` - Average value
+- `sum(field)` - Sum of values
+- `min(field)` - Minimum value
+- `max(field)` - Maximum value
+- `p50(field)`, `p75(field)`, `p90(field)`, `p95(field)`, `p99(field)` - Percentiles
+
+#### Errors-Specific Functions
+- `count_if(field,equals,value)` - Conditional count
+- `last_seen()` - Most recent timestamp
+- `eps()` - Events per second rate
+
+## Examples
+
+### Find Database Queries (Individual Events)
+```
+Query: has:db.statement
+Fields: ["id", "span.op", "span.description", "span.duration", "transaction", "timestamp", "project", "trace", "db.system", "db.statement"]
+Sort: -span.duration
+Dataset: spans
+```
+
+### Top 10 Slowest API Endpoints (Aggregate)
+```
+Query: is_transaction:true
+Fields: ["transaction", "count()", "avg(span.duration)", "p95(span.duration)"]
+Sort: -avg(span.duration)
+Dataset: spans
+```
+
+### Error Count by Type (Aggregate)
+```
+Query: level:error
+Fields: ["error.type", "count()"]
+Sort: -count()
+Dataset: errors
+```
+
+### Logs by Severity (Aggregate)
+```
+Query: (empty)
+Fields: ["severity", "count()", "epm()"]
+Sort: -count()
+Dataset: logs
+```
+
+### Tool Calls by Model (Aggregate)
+```
+Query: has:mcp.tool.name
+Fields: ["ai.model.id", "mcp.tool.name", "count()"]
+Sort: -count()
+Dataset: spans
+```
+
+### HTTP Requests (Individual Events)
+```
+Query: has:request.url
+Fields: ["id", "span.op", "span.description", "span.duration", "transaction", "timestamp", "project", "trace", "request.url", "request.method"]
+Sort: -timestamp
+Dataset: spans
+```
+
+## Common Pitfalls
+
+1. **Mixing aggregate and non-aggregate fields**: Don't include fields like `timestamp` or `id` in aggregate queries
+2. **Wrong sort field**: The field you sort by must be included in the fields array
+3. **Timestamp filters on logs**: Use `statsPeriod` parameter instead of query filters
+4. **Using project slugs**: API requires numeric project IDs, not slugs
+5. **Dataset naming**: Use `logs` in the tool, but API expects `ourlogs`
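Pitfalls 1 and 2 can be caught before sending a request. The following is a hypothetical pre-flight check, not part of the tool's actual API; the function name and structure are illustrative:

```typescript
// Hypothetical pre-flight validation for aggregate queries.
const AGGREGATE_RE = /^\w+\(.*\)$/; // matches count(), avg(span.duration), etc.

function checkQueryFields(fields: string[], sort: string): string[] {
  const problems: string[] = [];
  const isAggregate = fields.some((f) => AGGREGATE_RE.test(f));
  // Pitfall 1: event-level fields mixed into an aggregate query
  if (isAggregate) {
    for (const f of ["id", "timestamp"]) {
      if (fields.includes(f)) {
        problems.push(`remove event-level field "${f}" from aggregate query`);
      }
    }
  }
  // Pitfall 2: the sort field must appear in the fields array
  const sortField = sort.replace(/^-/, "");
  if (!fields.includes(sortField)) {
    problems.push(`sort field "${sortField}" missing from fields`);
  }
  return problems;
}
```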
+
+## Web UI URL Generation
+
+The tool automatically generates shareable Sentry web UI URLs after making API calls. These URLs allow users to view results in the Sentry interface:
+
+- **Errors dataset**: `/organizations/{org}/discover/results/`
+- **Spans dataset**: `/organizations/{org}/explore/traces/`
+- **Logs dataset**: `/organizations/{org}/explore/logs/`
+
+Note: The web UI URLs use different parameter formats than the API:
+- Legacy Discover uses simple field parameters
+- Modern Explore uses `aggregateField` with JSON-encoded values
+- The tool handles this transformation automatically in `buildDiscoverUrl()` and `buildEapUrl()`
+
+### Web URL Generation Parameters
+
+The `getEventsExplorerUrl()` method accepts these parameters to determine URL format:
+
+1. **organizationSlug**: Organization identifier
+2. **query**: The Sentry query string
+3. **projectSlug**: Numeric project ID (optional)
+4. **dataset**: "spans", "errors", or "logs"
+5. **fields**: Array of fields (used to detect if it's an aggregate query)
+6. **sort**: Sort parameter
+7. **aggregateFunctions**: Array of aggregate functions (e.g., `["count()", "avg(span.duration)"]`)
+8. **groupByFields**: Array of fields to group by (e.g., `["span.op", "ai.model.id"]`)
+
+Based on these parameters:
+- If `aggregateFunctions` has items → generates an aggregate query URL
+- For the errors dataset → routes to the Legacy Discover URL format
+- For the spans/logs datasets → routes to the Modern Explore URL format with JSON-encoded `aggregateField` parameters
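The routing above can be sketched as follows. This is an illustrative reduction, not the actual `getEventsExplorerUrl()` internals:

```typescript
type Dataset = "errors" | "spans" | "logs";

// Sketch of the dataset-to-page routing described above.
function explorerPath(org: string, dataset: Dataset): string {
  if (dataset === "errors") {
    // Errors route to Legacy Discover
    return `/organizations/${org}/discover/results/`;
  }
  // Spans and logs route to Modern Explore
  const page = dataset === "spans" ? "traces" : "logs";
  return `/organizations/${org}/explore/${page}/`;
}
```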
+
+## API vs Web UI URLs
+
+### Important Distinction
+
+The API and Web UI use different parameter formats:
+
+**API (Backend)**: Always uses the same format regardless of dataset
+- Endpoint: `/api/0/organizations/{org}/events/`
+- Parameters: `field`, `query`, `sort`, `dataset`, etc.
+- Example: `?dataset=spans&field=span.op&field=count()&sort=-count()`
+
+**Web UI (Frontend)**: Different formats for different pages
+- Legacy Discover: `/organizations/{org}/discover/results/`
+- Modern Explore: `/organizations/{org}/explore/{dataset}/`
+- Uses different parameter encoding (e.g., `aggregateField` with JSON for explore pages)
+
+### API Parameter Format
+
+The API **always** uses this format for all datasets:
+
+**Individual Events:**
+```
+?dataset=spans
+&field=id
+&field=span.op
+&field=span.description
+&query=span.op:db
+&sort=-timestamp
+&statsPeriod=24h
+```
+
+**Aggregate Queries:**
+```
+?dataset=spans
+&field=span.op
+&field=count()
+&query=span.op:db*
+&sort=-count()
+&statsPeriod=24h
+```
+
+The only difference between datasets is the `dataset` parameter value and available fields.
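The repeated `field` parameter can be assembled with `URLSearchParams` by appending (not setting) each entry. A minimal sketch; the helper name is illustrative:

```typescript
// Builds the API query string shown above; note append() for repeated fields.
function buildEventsQuery(params: {
  dataset: string;
  fields: string[];
  query: string;
  sort: string;
  statsPeriod?: string;
}): string {
  const qs = new URLSearchParams();
  qs.set("dataset", params.dataset);
  for (const f of params.fields) qs.append("field", f); // one &field= per entry
  qs.set("query", params.query);
  qs.set("sort", params.sort);
  if (params.statsPeriod) qs.set("statsPeriod", params.statsPeriod);
  return qs.toString();
}
```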
+
+## Time Range Filtering
+
+All API endpoints support time range filtering using either relative or absolute time parameters:
+
+**Relative Time** (`statsPeriod`):
+- Format: number + unit (e.g., `1h`, `24h`, `7d`, `30d`)
+- Default: `14d` (last 14 days)
+- Example: `?statsPeriod=7d`
+
+**Absolute Time** (`start` and `end`):
+- Format: ISO 8601 timestamps
+- Both parameters must be provided together
+- Example: `?start=2025-06-19T07:00:00&end=2025-06-20T06:59:59`
+
+**Important**: Cannot use both `statsPeriod` and `start`/`end` parameters in the same request.
+
+**Applies to**:
+- Events API: `/organizations/{org}/events/`
+- Tags API: `/organizations/{org}/tags/`
+- Trace Items Attributes API: `/organizations/{org}/trace-items/attributes/`
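The mutual-exclusion rule above can be expressed as a small guard. This is a hypothetical client-side check, not code from the tool:

```typescript
// Enforces: statsPeriod XOR start/end, and start/end always paired.
function validateTimeRange(opts: { statsPeriod?: string; start?: string; end?: string }): void {
  const hasPeriod = opts.statsPeriod !== undefined;
  const hasAbsolute = opts.start !== undefined || opts.end !== undefined;
  if (hasPeriod && hasAbsolute) {
    throw new Error("Cannot combine statsPeriod with start/end");
  }
  // start and end must be provided together
  if ((opts.start === undefined) !== (opts.end === undefined)) {
    throw new Error("start and end must both be provided");
  }
}
```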
+
+## Attribute Lookup Endpoints
+
+### Overview
+
+Before translating queries, the tool fetches available attributes/fields for the organization. This ensures the AI knows about custom attributes specific to the organization.
+
+### Tags Endpoint (Errors Dataset)
+
+**Endpoint**: `/api/0/organizations/{org}/tags/`
+
+**Parameters**:
+- `dataset`: Always `events` for error data
+- `project`: Numeric project ID (optional)
+- `statsPeriod`: Time range (e.g., `24h`)
+- `useCache`: Set to `1` for performance
+- `useFlagsBackend`: Set to `1` for latest features
+
+**Example**:
+```
+https://us.sentry.io/api/0/organizations/sentry/tags/?dataset=events&project=4509062593708032&statsPeriod=24h&useCache=1&useFlagsBackend=1
+```
+
+**Response Format**:
+```json
+[
+ {
+ "key": "browser.name",
+ "name": "Browser Name"
+ },
+ {
+ "key": "custom.payment_method",
+ "name": "Payment Method"
+ }
+]
+```
+
+**Processing**:
+- Filters out `sentry:` prefixed tags (internal tags)
+- Maps to key-value pairs for the AI prompt
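The processing step can be sketched as below, assuming the response shape shown above; the function name is illustrative:

```typescript
interface Tag { key: string; name: string; }

// Drops internal `sentry:`-prefixed tags and maps the rest to key/name pairs.
function processTags(tags: Tag[]): Record<string, string> {
  const result: Record<string, string> = {};
  for (const tag of tags) {
    if (tag.key.startsWith("sentry:")) continue; // internal tag, skip
    result[tag.key] = tag.name;
  }
  return result;
}
```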
+
+### Trace Items Attributes Endpoint (Spans/Logs Datasets)
+
+**Endpoint**: `/api/0/organizations/{org}/trace-items/attributes/`
+
+**Parameters**:
+- `itemType`: Either `spans` or `logs` (plural!)
+- `attributeType`: Either `string` or `number`
+- `project`: Numeric project ID (optional)
+- `statsPeriod`: Time range
+
+**Examples**:
+
+Spans string attributes:
+```
+https://us.sentry.io/api/0/organizations/sentry/trace-items/attributes/?attributeType=string&itemType=spans&project=4509062593708032&statsPeriod=24h
+```
+
+Spans number attributes:
+```
+https://us.sentry.io/api/0/organizations/sentry/trace-items/attributes/?attributeType=number&itemType=spans&project=4509062593708032&statsPeriod=24h
+```
+
+Logs string attributes:
+```
+https://us.sentry.io/api/0/organizations/sentry/trace-items/attributes/?attributeType=string&itemType=logs&project=4509062593708032&statsPeriod=24h
+```
+
+**Response Format**:
+```json
+[
+ {
+ "key": "span.duration",
+ "name": "Span Duration",
+ "type": "number"
+ },
+ {
+ "key": "ai.model.id",
+ "name": "AI Model ID",
+ "type": "string"
+ }
+]
+```
+
+### Implementation Strategy
+
+The tool makes parallel requests to fetch attributes efficiently:
+
+1. **For errors**: Single request to tags endpoint with optimized parameters
+2. **For spans/logs**: Single request that internally fetches both string + number attributes
+
+```typescript
+// For errors dataset
+const tagsResponse = await apiService.listTags({
+ organizationSlug,
+ dataset: "events",
+ statsPeriod: "14d",
+ useCache: true,
+ useFlagsBackend: true
+});
+
+// For spans/logs datasets
+const attributesResponse = await apiService.listTraceItemAttributes({
+ organizationSlug,
+ itemType: "spans", // or "logs"
+ statsPeriod: "14d"
+});
+```
+
+Note: The `listTraceItemAttributes` method internally makes parallel requests for string and number attributes.
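That internal parallelism can be sketched with `Promise.all`. Here `fetchAttributes` is a stand-in for the real per-type API call, not the actual method signature:

```typescript
type AttrType = "string" | "number";
interface Attribute { key: string; name: string; type: AttrType; }

// Fetches string and number attributes concurrently, then merges them.
async function fetchAllAttributes(
  fetchAttributes: (attributeType: AttrType) => Promise<Attribute[]>,
): Promise<Attribute[]> {
  const [strings, numbers] = await Promise.all([
    fetchAttributes("string"),
    fetchAttributes("number"),
  ]);
  return [...strings, ...numbers];
}
```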
+
+### Custom Attributes Integration
+
+After fetching, custom attributes are merged with base fields:
+
+```typescript
+const allFields = {
+ ...BASE_COMMON_FIELDS, // Common fields across datasets
+ ...DATASET_FIELDS[dataset], // Dataset-specific fields
+ ...customAttributes // Organization-specific fields
+};
+```
+
+This ensures the AI knows about all available fields when translating queries.
+
+### Error Handling
+
+If attribute fetching fails:
+- The tool continues with just the base fields
+- Logs the error for debugging
+- Does not fail the entire query
+
+This graceful degradation ensures queries still work even if custom attributes can't be fetched.
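A sketch of that fallback behavior, with illustrative names:

```typescript
// On failure, logs the error and falls back to base fields instead of failing.
async function getFieldsWithFallback(
  baseFields: Record<string, string>,
  fetchCustomAttributes: () => Promise<Record<string, string>>,
): Promise<Record<string, string>> {
  try {
    const custom = await fetchCustomAttributes();
    return { ...baseFields, ...custom };
  } catch (err) {
    // Log server-side for debugging, but don't fail the query
    console.error("attribute fetch failed:", err);
    return baseFields;
  }
}
```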
+
+## Best Practices
+
+1. **Be specific with fields**: Only request fields you need
+2. **Use appropriate limits**: Default 10, max 100 per page
+3. **Leverage aggregate functions**: For summaries and statistics
+4. **Include context fields**: Add fields like `project`, `environment` when grouping
+5. **Sort meaningfully**: Use `-count()` for popularity, `-timestamp` for recency
+6. **Handle custom attributes**: Tool automatically fetches org-specific attributes
+7. **Understand dataset differences**: Each dataset has different capabilities and constraints
+
+## Implementation Details
+
+### Code Architecture
+
+The search_events tool handles the complexity of multiple API patterns:
+
+1. **AI Translation Layer**
+ - Uses OpenAI GPT-5 to translate natural language to Sentry query syntax
+ - Maintains dataset-specific system prompts with examples
+ - Aggregate functions and groupBy fields are derived from the fields array
+
+2. **Field Handling**
+ - Aggregate queries: Only includes aggregate functions and groupBy fields
+ - Non-aggregate queries: Uses default fields or AI-specified fields
+ - Validates that sort fields are included in the field list
+ - Detects aggregate queries by checking for function syntax in fields
+
+3. **Field Type Validation**
+ - Validates numeric aggregate functions (avg, sum, min, max, percentiles) are only used with numeric fields
+ - Tracks field types from both known fields and custom attributes
+ - Returns error messages when invalid combinations are attempted
+
+4. **Web UI URL Generation** (for shareable links)
+   - `buildDiscoverUrl()` for errors dataset → creates Discover page URLs
+   - `buildEapUrl()` for spans/logs datasets → creates Explore page URLs
+ - Transforms API response format to web UI parameter format
+ - Note: These methods generate web URLs, not API URLs
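The field-type validation in item 3 can be sketched as follows. The function shape is hypothetical; only the rule (numeric functions require numeric fields) comes from the text above:

```typescript
const NUMERIC_FUNCTIONS = ["avg", "sum", "min", "max", "p50", "p75", "p90", "p95", "p99"];

// Returns an error message for invalid combinations, or null if the field is fine.
function validateNumericAggregate(field: string, fieldTypes: Record<string, string>): string | null {
  const match = field.match(/^(\w+)\((.+)\)$/);
  if (!match) return null; // not an aggregate function over a field
  const [, fn, inner] = match;
  if (NUMERIC_FUNCTIONS.includes(fn) && fieldTypes[inner] !== "number") {
    return `${fn}() requires a numeric field, but "${inner}" is ${fieldTypes[inner] ?? "unknown"}`;
  }
  return null;
}
```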
+
+### Response Format Differences
+
+**Legacy Discover Response (errors):**
+```json
+{
+ "data": [
+ {
+ "error.type": "TypeError",
+ "count()": 150,
+ "last_seen()": "2025-01-16T12:00:00Z"
+ }
+ ]
+}
+```
+
+**EAP Response (spans/logs):**
+```json
+{
+ "data": [
+ {
+ "span.op": "db.query",
+ "count()": 1250,
+ "avg(span.duration)": 45.3
+ }
+ ]
+}
+```
+
+## Troubleshooting
+
+### "Ordered by columns not selected" Error
+This occurs when sorting by a field not included in the field list. Ensure your sort field is in the fields array.
+
+### Empty Results
+- Check query syntax is valid
+- Verify time range (`statsPeriod`)
+- Ensure project has data for the selected dataset
+- Try broadening the query
+
+### API Errors
+- 400: Invalid query syntax or parameters (often due to field mismatch in aggregates)
+- 404: Project or organization not found
+- 500: Internal error (check Sentry status)
\ No newline at end of file
diff --git a/docs/security.mdc b/docs/security.mdc
new file mode 100644
index 000000000..0975e265d
--- /dev/null
+++ b/docs/security.mdc
@@ -0,0 +1,231 @@
+# Security
+
+Authentication and security patterns for the Sentry MCP server.
+
+## OAuth Architecture
+
+The MCP server acts as an OAuth proxy between clients and Sentry:
+
+```
+MCP Client → MCP Server → Sentry OAuth → Sentry API
+```
+
+### Key Components
+
+1. **OAuth Provider** (@cloudflare/workers-oauth-provider)
+ - Manages client authorization
+ - Stores tokens in KV storage
+ - Handles state management
+ - Sets auth props in ExecutionContext
+
+2. **Client Approval**
+ - First-time clients require user approval
+ - Approved clients stored in signed cookies
+ - Per-organization access control
+
+3. **Token Management**
+ - Access tokens encrypted in KV storage
+ - Tokens scoped to organizations
+
+## Implementation Patterns
+
+### OAuth Flow Handler
+
+See implementation: `packages/mcp-cloudflare/src/server/oauth/authorize.ts` and `packages/mcp-cloudflare/src/server/oauth/callback.ts`
+
+Key endpoints:
+
+- `/authorize` - Client approval and redirect to Sentry
+- `/callback` - Handle Sentry callback, store tokens
+- `/approve` - Process user approval
+
+### Required OAuth Scopes
+
+```typescript
+const REQUIRED_SCOPES = [
+ "org:read",
+ "project:read",
+ "issue:read",
+ "issue:write"
+];
+```
+
+### Security Context
+
+```typescript
+interface ServerContext {
+ userId?: string;
+ clientId: string;
+ accessToken: string;
+  grantedScopes?: Set<string>;
+ constraints: Constraints;
+ sentryHost: string;
+ mcpUrl?: string;
+}
+```
+
+Context captured in closures during server build and propagated through:
+
+- Tool handlers (via closure capture and direct parameter passing)
+- API client initialization
+- Error messages (sanitized)
+
+## Security Measures
+
+### SSRF Protection
+
+The MCP server validates `regionUrl` parameters to prevent Server-Side Request Forgery (SSRF) attacks:
+
+```typescript
+// Region URL validation rules:
+// 1. By default, only the base host itself is allowed as regionUrl
+// 2. Additional domains must be in SENTRY_ALLOWED_REGION_DOMAINS allowlist
+// 3. Must use HTTPS protocol for security
+// 4. Empty/undefined regionUrl means use the base host
+
+// Base host always allowed
+validateRegionUrl("https://sentry.io", "sentry.io"); // ✅ Base host match
+validateRegionUrl("https://mycompany.com", "mycompany.com"); // ✅ Base host match
+
+// Allowlist domains (sentry.io, us.sentry.io, de.sentry.io)
+validateRegionUrl("https://us.sentry.io", "sentry.io"); // ✅ In allowlist
+validateRegionUrl("https://de.sentry.io", "mycompany.com"); // ✅ In allowlist
+validateRegionUrl("https://sentry.io", "mycompany.com"); // ✅ In allowlist
+
+// Rejected domains
+validateRegionUrl("https://evil.com", "sentry.io"); // ❌ Not in allowlist
+validateRegionUrl("http://us.sentry.io", "sentry.io"); // ❌ Must use HTTPS
+validateRegionUrl("https://eu.sentry.io", "sentry.io"); // ❌ Not in allowlist
+validateRegionUrl("https://sub.mycompany.com", "mycompany.com"); // ❌ Not base host or allowlist
+```
+
+Implementation: `packages/mcp-server/src/internal/tool-helpers/validate-region-url.ts`
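A minimal sketch of those rules follows. It is not the real implementation (see the path above); the allowlist is hard-coded here to mirror the documented example values, whereas the real one comes from `SENTRY_ALLOWED_REGION_DOMAINS`:

```typescript
// Illustrative allowlist matching the examples above.
const ALLOWED_REGION_DOMAINS = new Set(["sentry.io", "us.sentry.io", "de.sentry.io"]);

function validateRegionUrl(regionUrl: string, baseHost: string): string {
  const url = new URL(regionUrl);
  // Rule 3: HTTPS only
  if (url.protocol !== "https:") {
    throw new Error("Region URL must use HTTPS");
  }
  // Rules 1-2: base host is always allowed; anything else must be allowlisted
  if (url.hostname !== baseHost && !ALLOWED_REGION_DOMAINS.has(url.hostname)) {
    throw new Error(`Region URL host not allowed: ${url.hostname}`);
  }
  return url.hostname;
}
```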
+
+### Prompt Injection Protection
+
+Tools that accept user input are vulnerable to prompt injection attacks. Key mitigations:
+
+1. **Parameter Validation**: All tool inputs validated with Zod schemas
+2. **URL Validation**: URLs parsed and validated before use
+3. **Region Constraints**: Region URLs restricted to known Sentry domains
+4. **No Direct Command Execution**: Tools don't execute user-provided commands
+
+Example protection in tools:
+
+```typescript
+// URLs must be valid and from expected domains
+if (!issueUrl.includes('sentry.io')) {
+ throw new UserInputError("Invalid Sentry URL");
+}
+
+// Region URLs validated against base host
+const validatedHost = validateRegionUrl(regionUrl, baseHost);
+```
+
+### State Parameter Protection
+
+The OAuth `state` is a compact HMAC-signed payload with a 10-minute expiry:
+
+```typescript
+// Payload contains only what's needed on callback
+type OAuthState = {
+ clientId: string;
+ redirectUri: string; // must be a valid URL
+ scope: string[]; // from OAuth provider parseAuthRequest
+ permissions?: string[]; // user selections from approval
+ iat: number; // issued at (ms)
+ exp: number; // expires at (ms)
+};
+
+// Sign: `${hex(hmacSHA256(json))}.${btoa(json)}` using COOKIE_SECRET
+const signed = `${signatureHex}.${btoa(JSON.stringify(state))}`;
+
+// On callback: split, verify signature, parse, check exp > Date.now()
+```
+
+Implementation: `packages/mcp-cloudflare/src/server/oauth/state.ts`
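The sign/verify cycle can be sketched with Node's `crypto` module. The real Workers code (see path above) may use the Web Crypto API instead; this is an assumption-laden illustration of the scheme, not that implementation:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Produces `${hex(hmacSHA256(json))}.${base64(json)}` as described above.
function signState(state: object, secret: string): string {
  const json = JSON.stringify(state);
  const sig = createHmac("sha256", secret).update(json).digest("hex");
  return `${sig}.${Buffer.from(json).toString("base64")}`;
}

// Splits, verifies the signature, parses, and checks expiry.
function verifyState(signed: string, secret: string): Record<string, unknown> | null {
  const dot = signed.indexOf(".");
  if (dot === -1) return null;
  const sig = signed.slice(0, dot);
  const json = Buffer.from(signed.slice(dot + 1), "base64").toString();
  const expected = createHmac("sha256", secret).update(json).digest("hex");
  if (sig.length !== expected.length) return null;
  if (!timingSafeEqual(Buffer.from(sig), Buffer.from(expected))) return null;
  const state = JSON.parse(json);
  if (typeof state.exp === "number" && state.exp <= Date.now()) return null; // expired
  return state;
}
```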
+
+### Input Validation
+
+All user inputs sanitized:
+
+- HTML content escaped
+- URLs validated
+- OAuth parameters verified
+
+### Cookie Security
+
+```typescript
+// Signed cookie for approved clients
+const cookie = await signCookie(
+ `approved_clients=${JSON.stringify(approvedClients)}`,
+ COOKIE_SECRET
+);
+
+// Cookie attributes
+"HttpOnly; Secure; SameSite=Lax; Max-Age=2592000" // 30 days
+```
+
+## Error Handling
+
+Security-aware error responses:
+
+- No token/secret exposure in errors
+- Generic messages for auth failures
+- Detailed logging server-side only
+
+```typescript
+catch (error) {
+ if (error.message.includes("token")) {
+ return new Response("Authentication failed", { status: 401 });
+ }
+ // Log full error server-side
+ console.error("OAuth error:", error);
+ return new Response("An error occurred", { status: 500 });
+}
+```
+
+## Multi-Tenant Security
+
+### Organization Isolation
+
+- Tokens scoped to organizations
+- Users can switch organizations
+- Each organization requires separate approval
+
+### Access Control
+
+```typescript
+// Verify organization access
+const orgs = await apiService.listOrganizations();
+if (!orgs.find(org => org.slug === requestedOrg)) {
+ throw new UserInputError("No access to organization");
+}
+```
+
+## Environment Variables
+
+Required for OAuth:
+
+```bash
+SENTRY_CLIENT_ID=your_oauth_app_id
+SENTRY_CLIENT_SECRET=your_oauth_app_secret
+COOKIE_SECRET=random_32_char_string
+```
+
+## CORS Configuration
+
+```typescript
+// Allowed origins for OAuth flow
+const ALLOWED_ORIGINS = [
+ "https://sentry.io",
+ "https://*.sentry.io"
+];
+```
+
+## References
+
+- OAuth implementation: `packages/mcp-cloudflare/src/server/oauth/*`
+- Cookie utilities: `packages/mcp-cloudflare/src/server/utils/cookies.ts`
+- OAuth Provider: `packages/mcp-cloudflare/src/server/bindings.ts`
+- Sentry OAuth docs:
diff --git a/docs/specs/README.md b/docs/specs/README.md
new file mode 100644
index 000000000..ffca52225
--- /dev/null
+++ b/docs/specs/README.md
@@ -0,0 +1,68 @@
+# Feature Specifications
+
+This directory contains detailed specifications for features in the Sentry MCP server. Each feature has its own subdirectory with related design documents, technical specifications, and implementation guides.
+
+## Purpose
+
+Feature specifications serve to:
+
+1. **Document Design Decisions**: Capture the reasoning behind architectural choices
+2. **Define Interfaces**: Specify tool inputs, outputs, and behavior
+3. **Guide Implementation**: Provide clear direction for developers
+4. **Enable Review**: Allow stakeholders to review and provide feedback
+5. **Preserve Knowledge**: Maintain historical context for future reference
+
+## Creating New Specifications
+
+When adding a new feature specification:
+
+1. Create a new directory under `specs/` with a descriptive name
+2. Create a **single, concise README.md file** that covers:
+ - Problem statement and motivation
+ - High-level design approach
+ - Interface definitions (with code examples)
+ - Key constraints and requirements
+ - Migration/compatibility concerns
+3. Update this README with a brief description
+4. Link to the spec from relevant documentation
+
+**Important Guidelines**:
+- Keep specs in a single file (README.md)
+- Focus on WHAT and WHY, not HOW
+- Include code examples for interfaces and usage
+- Document constraints and meta concerns
+- Avoid implementation details (no function internals, prompts, etc.)
+- Think "contract" not "blueprint"
+
+## Current Specifications
+
+### search-events
+A unified event search tool that uses OpenAI GPT-5 to translate natural language queries into Sentry's search syntax. Replaced the separate `find_errors` and `find_transactions` tools with a single, more powerful interface.
+
+- **Status**: β Complete
+- **Key Benefits**: Reduces tool count (20 → 19), improves UX, accessible to non-technical users
+
+## Specification Template
+
+For consistency, new specifications should include:
+
+1. **Overview**: Problem statement and proposed solution
+2. **Motivation**: Why this feature is needed
+3. **Design**: Technical architecture and approach
+4. **Interface**: API/tool definitions
+5. **Examples**: Usage scenarios and expected behavior
+6. **Implementation**: Step-by-step plan (NO time estimates)
+7. **Testing**: Validation strategy
+8. **Migration**: If replacing existing functionality
+9. **Future Work**: Potential enhancements
+
+**Important**: Do NOT include time windows, deadlines, or duration estimates in specifications. Implementation timing is determined by agents and project priorities, not by the spec.
+
+## Review Process
+
+1. Create specification documents in a feature branch
+2. Open PR for review by team members
+3. Address feedback and iterate
+4. Merge once consensus is reached
+5. Update status as implementation progresses
\ No newline at end of file
diff --git a/docs/specs/search-events.md b/docs/specs/search-events.md
new file mode 100644
index 000000000..8da956f3f
--- /dev/null
+++ b/docs/specs/search-events.md
@@ -0,0 +1,166 @@
+# search_events Tool Specification
+
+## Overview
+
+A unified search tool that accepts natural language queries and translates them to Sentry's discover endpoint parameters using OpenAI GPT-5. Replaces `find_errors` and `find_transactions` with a single, more flexible interface.
+
+## Motivation
+
+- **Before**: Two separate tools with rigid parameters, users must know Sentry query syntax
+- **After**: Single tool with natural language input, AI handles translation to Sentry syntax
+- **Benefits**: Better UX, reduced tool count (20 → 19), accessible to non-technical users
+
+## Interface
+
+```typescript
+interface SearchEventsParams {
+ organizationSlug: string; // Required
+ naturalLanguageQuery: string; // Natural language search description
+ dataset?: "spans" | "errors" | "logs"; // Dataset to search (default: "errors")
+ projectSlug?: string; // Optional - limit to specific project
+ regionUrl?: string;
+ limit?: number; // Default: 10, Max: 100
+ includeExplanation?: boolean; // Include translation explanation
+}
+```
+
+### Examples
+
+```typescript
+// Find errors (errors dataset is default)
+search_events({
+ organizationSlug: "my-org",
+ naturalLanguageQuery: "database timeouts in checkout flow from last hour"
+})
+
+// Find slow transactions
+search_events({
+ organizationSlug: "my-org",
+ naturalLanguageQuery: "API calls taking over 5 seconds",
+ projectSlug: "backend",
+ dataset: "spans"
+})
+
+// Find logs
+search_events({
+ organizationSlug: "my-org",
+ naturalLanguageQuery: "warning logs about memory usage",
+ dataset: "logs"
+})
+```
+
+## Architecture
+
+1. **Tool receives** natural language query and dataset selection
+2. **Fetches searchable attributes** based on dataset:
+ - For `spans`/`logs`: Uses `/organizations/{org}/trace-items/attributes/` endpoint with parallel calls for string and number attribute types
+ - For `errors`: Uses `/organizations/{org}/tags/` endpoint (legacy, will migrate when new API supports errors)
+3. **OpenAI GPT-5 translates** natural language to Sentry query syntax using:
+ - Comprehensive system prompt with Sentry query syntax rules
+ - Dataset-specific field mappings and query patterns
+ - Organization's custom attributes (fetched in step 2)
+4. **Executes** discover endpoint: `/organizations/{org}/events/` with:
+ - Translated query string
+ - Dataset-specific field selection
+ - Numeric project ID (converted from slug if provided)
+   - Proper dataset mapping (logs → ourlogs)
+5. **Returns** formatted results with:
+ - Dataset-specific rendering (console format for logs, cards for errors, timeline for spans)
+ - Prominent rendering directives for AI agents
+ - Shareable Sentry Explorer URL
+
+## Key Implementation Details
+
+### OpenAI Integration
+
+- **Model**: GPT-5 for natural language to Sentry query translation (configurable via `configureOpenAIProvider`)
+- **System prompt**: Contains comprehensive Sentry query syntax, dataset-specific rules, and available fields
+- **Environment**: Requires `OPENAI_API_KEY` environment variable
+- **Custom attributes**: Automatically fetched and included in system prompt for each organization
+
+### Dataset-Specific Translation
+
+The AI produces different query patterns based on the selected dataset:
+
+- **Spans dataset**: Focus on `span.op`, `span.description`, `span.duration`, `transaction`, supports timestamp filters
+- **Errors dataset**: Focus on `message`, `level`, `error.type`, `error.handled`, supports timestamp filters
+- **Logs dataset**: Focus on `message`, `severity`, `severity_number`, **NO timestamp filters** (uses statsPeriod instead)
+
+### Key Technical Constraints
+
+- **Logs timestamp handling**: Logs don't support query-based timestamp filters like `timestamp:-1h`. Instead, use `statsPeriod=24h` parameter
+- **Project ID mapping**: API requires numeric project IDs, not slugs. Tool automatically converts project slugs to IDs
+- **Parallel attribute fetching**: For spans/logs, fetches both string and number attribute types in parallel for better performance
+- **itemType specification**: Must use "logs" (plural) not "log" for the trace-items attributes API
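The mapping constraints above reduce to two small lookup tables. These are illustrative constants, not exports from the codebase:

```typescript
// Tool-facing dataset name → API dataset parameter value.
const API_DATASET: Record<string, string> = {
  errors: "errors",
  spans: "spans",
  logs: "ourlogs", // the API calls the logs dataset "ourlogs"
};

// Tool-facing dataset name → trace-items attributes itemType value.
const API_ITEM_TYPE: Record<string, string> = {
  spans: "spans",
  logs: "logs", // plural, not "log"
};
```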
+
+### Tool Removal
+
+- **Must remove** `find_errors` and `find_transactions` in same PR ✅
+ - Removed from tool exports
+ - Files still exist but are no longer used
+- **Migration required** for existing usage
+ - Updated `find_errors_in_file` prompt to use `search_events`
+- **Documentation** updates needed
+
+## Migration Examples
+
+```typescript
+// Before
+find_errors({
+ organizationSlug: "sentry",
+ filename: "checkout.js",
+ query: "is:unresolved"
+})
+
+// After
+search_events({
+ organizationSlug: "sentry",
+ naturalLanguageQuery: "unresolved errors in checkout.js"
+})
+```
+
+## Implementation Status
+
+### Completed Features
+
+1. **Custom attributes API integration**:
+   - ✅ `/organizations/{org}/trace-items/attributes/` for spans/logs with parallel string/number fetching
+   - ✅ `/organizations/{org}/tags/` for errors (legacy API)
+
+2. **Dataset mapping**:
+   - ✅ User specifies `logs` → API uses `ourlogs`
+   - ✅ User specifies `errors` → API uses `errors`
+   - ✅ User specifies `spans` → API uses `spans`
+
+3. **URL Generation**:
+   - ✅ Uses appropriate explore path based on dataset (`/explore/traces/`, `/explore/logs/`)
+   - ✅ Query and project parameters properly encoded with numeric project IDs
+
+4. **Error Handling**:
+   - ✅ Enhanced error messages with Sentry event IDs for debugging
+   - ✅ Graceful handling of missing projects, API failures
+   - ✅ Clear error messages for missing OpenAI API key
+
+5. **Output Formatting**:
+   - ✅ Dataset-specific rendering instructions for AI agents
+   - ✅ Console format for logs with severity emojis
+   - ✅ Alert cards for errors with color-coded levels
+   - ✅ Performance timeline for spans with duration bars
+
+## Success Criteria - All Complete ✅
+
+- ✅ **Accurate translation of common query patterns** - GPT-5 with comprehensive system prompts
+- ✅ **Proper handling of org-specific custom attributes** - Parallel fetching and integration
+- ✅ **Seamless migration from old tools** - find_errors, find_transactions removed from exports
+- ✅ **Maintains performance** - Parallel API calls, efficient caching, translation overhead minimal
+- ✅ **Supports multiple datasets** - spans, errors, logs with dataset-specific handling
+- ✅ **Generates shareable Sentry Explorer URLs** - Proper encoding with numeric project IDs
+- ✅ **Clear output indicating URL should be shared** - Prominent sharing instructions
+- ✅ **Comprehensive test coverage** - Unit tests, integration tests, and AI evaluations
+- ✅ **Production ready** - Error handling, logging, graceful degradation
+
+## Dependencies
+
+- **Runtime**: OpenAI API key required (`OPENAI_API_KEY` environment variable)
+- **Build**: @ai-sdk/openai, ai packages added to dependencies
+- **Testing**: Comprehensive mocks for OpenAI and Sentry APIs
\ No newline at end of file
diff --git a/docs/specs/subpath-constraints.md b/docs/specs/subpath-constraints.md
new file mode 100644
index 000000000..813885e9f
--- /dev/null
+++ b/docs/specs/subpath-constraints.md
@@ -0,0 +1,79 @@
+# Subpath-Based Constraints (End-User Guide)
+
+## What constraints do
+
+Constraints let you scope your Sentry MCP session to a specific organization and optionally a project. When scoped, all tools automatically use that org/project by default and only access data you are permitted to see.
+
+## How to connect
+
+- No scope: connect to `/mcp` (or `/sse` for SSE transport)
+- Organization scope: `/mcp/{organizationSlug}`
+- Organization + project scope: `/mcp/{organizationSlug}/{projectSlug}`
+
+The same pattern applies to the SSE endpoint: `/sse`, `/sse/{org}`, `/sse/{org}/{project}`.
+
+Examples:
+
+```
+/mcp/sentry
+/mcp/sentry/my-project
+/sse/sentry
+/sse/sentry/my-project
+```
+
+## What you'll experience
+
+- Tools automatically use the constrained organization/project as defaults
+- You can still pass explicit `organizationSlug`/`projectSlug` to override defaults per call
+- If you don't provide a scope, tools work across your accessible organizations when supported
+
+## Access verification
+
+When you connect with a scoped path, we validate that:
+- The slugs are well-formed
+- The organization exists and you have access
+- If a project is included, the project exists and you have access
+
+If there's a problem, you'll receive a clear HTTP error when connecting:
+- 400: Invalid slug format
+- 401: Missing authentication
+- 403: You don't have access to the specified org/project
+- 404: Organization or project not found
+
+## Region awareness
+
+For Sentry Cloud, your organization may be hosted in a regional cluster. When you scope by organization, we automatically determine the region (if available) and use it for API calls. You don't need to take any action; this happens behind the scenes. For self-hosted Sentry, the region concept doesn't apply.
+
+## Best practices
+
+- Prefer scoping by organization (and project when known) to reduce ambiguity and improve safety
+- Use scoped sessions when collaborating across multiple orgs to avoid cross-org access by mistake
+- If a tool reports access errors, reconnect with a different scope or verify your permissions in Sentry
+
+## Frequently asked questions
+
+- Can I switch scope mid-session?
+ - Yes. Open a new connection using a different subpath (e.g., `/mcp/{org}/{project}`) and use that session.
+
+- Do I need to specify scope for documentation or metadata endpoints?
+  - No. Public metadata endpoints don't require scope and support CORS.
+
+- How do tools know my scope?
+ - The MCP session embeds the constraints, and tools read them as defaults for `organizationSlug` and `projectSlug`.
+
+## Reference
+
+Supported URL patterns:
+```
+/mcp/{organizationSlug}/{projectSlug}
+/mcp/{organizationSlug}
+/mcp
+
+/sse/{organizationSlug}/{projectSlug}
+/sse/{organizationSlug}
+/sse
+```
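The patterns above can be matched with a single expression. This parser is hypothetical; the real server's routing and slug rules may differ:

```typescript
interface Constraints { organizationSlug?: string; projectSlug?: string; }

// Extracts org/project constraints from an /mcp or /sse subpath, or returns
// null for paths that don't match any supported pattern.
function parseScope(pathname: string): Constraints | null {
  const match = pathname.match(
    /^\/(?:mcp|sse)(?:\/([a-z0-9][a-z0-9._-]*))?(?:\/([a-z0-9][a-z0-9._-]*))?\/?$/i,
  );
  if (!match) return null;
  const [, organizationSlug, projectSlug] = match;
  return { organizationSlug, projectSlug };
}
```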
+
+For implementation details and security notes, see:
+- `docs/cloudflare/constraint-flow-verification.md`
+- `docs/architecture.mdc`
diff --git a/docs/testing-remote.md b/docs/testing-remote.md
new file mode 100644
index 000000000..9daef023b
--- /dev/null
+++ b/docs/testing-remote.md
@@ -0,0 +1,737 @@
+# Testing the Remote MCP Server
+
+Complete playbook for building, deploying, and testing the remote MCP server via HTTP transport.
+
+## Overview
+
+The remote MCP server runs on Cloudflare Workers and provides HTTP-based access to the MCP protocol. Clients connect via HTTP/SSE instead of stdio pipes.
+
+**When to use remote:**
+- Testing OAuth flows
+- Testing constraint-based access control (org/project filtering)
+- Testing the web chat interface
+- Production-like environment testing
+- Multi-user scenarios
+
+**When to use stdio instead:**
+- Self-hosted Sentry without OAuth
+- IDE integration testing
+- Direct API token authentication
+- Local development without network
+
+## Prerequisites
+
+- Node.js 20+
+- pnpm installed
+- Wrangler CLI (for Cloudflare deployment)
+- Sentry OAuth application credentials
+
+## Setup
+
+### 1. Clone and Install
+
+```bash
+cd sentry-mcp
+pnpm install
+```
+
+### 2. Create Environment Files
+
+```bash
+# Creates .env files from examples
+make setup-env
+```
+
+### 3. Configure Sentry OAuth App
+
+**Create an OAuth App in Sentry:**
+
+1. Go to Settings β API β [Applications](https://sentry.io/settings/account/api/applications/)
+2. Click "Create New Application"
+3. Configure:
+ - **Name:** "Sentry MCP Development" (or similar)
+ - **Homepage URL:** `http://localhost:5173`
+ - **Authorized Redirect URIs:** `http://localhost:5173/oauth/callback`
+4. Save and note your **Client ID** and **Client Secret**
+
+### 4. Configure Environment Variables
+
+**Edit `packages/mcp-cloudflare/.env`:**
+```bash
+SENTRY_CLIENT_ID=your_development_client_id
+SENTRY_CLIENT_SECRET=your_development_client_secret
+COOKIE_SECRET=generate-a-random-32-char-string
+
+# Optional: For AI-powered search tools
+OPENAI_API_KEY=your-openai-key
+```
+
+**Edit `.env` (root):**
+```bash
+# For testing CLI client and search tools
+OPENAI_API_KEY=your-openai-key
+```
+
+**Generate COOKIE_SECRET:**
+```bash
+# Using OpenSSL
+openssl rand -base64 32
+
+# Using Node.js
+node -e "console.log(require('crypto').randomBytes(32).toString('base64'))"
+```
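Either command should yield a value that decodes back to 32 random bytes; a quick sanity check:

```shell
# The secret should decode to exactly 32 bytes
secret=$(openssl rand -base64 32)
echo -n "$secret" | base64 -d | wc -c
```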
+
+## Running the Remote Server
+
+### Option 1: Local Development Server
+
+**Start the dev server:**
+```bash
+# From repo root
+pnpm dev
+
+# Or from cloudflare package
+cd packages/mcp-cloudflare
+pnpm dev
+```
+
+Server runs at: `http://localhost:5173`
+
+**What this does:**
+- Starts Cloudflare Workers local dev environment
+- Enables hot reload for code changes
+- Uses local KV storage (not persisted)
+- Serves the web UI at root
+- MCP endpoint at `/mcp`
+
+### Option 2: Deploy to Cloudflare
+
+**Deploy to your Cloudflare account:**
+```bash
+cd packages/mcp-cloudflare
+pnpm deploy
+```
+
+**Deploy to production (requires permissions):**
+```bash
+# Automated via GitHub Actions on push to main
+# Manual deployment:
+pnpm deploy --env production
+```
+
+## Testing with the CLI Client
+
+The CLI client (`mcp-test-client`) provides a command-line interface for testing the remote server.
+
+**IMPORTANT: For local testing, you must have the dev server running first:**
+
+```bash
+# Start dev server in background (required for local testing)
+pnpm dev
+
+# Then in another terminal, run CLI tests
+pnpm -w run cli "your query"
+```
+
+### Basic Usage
+
+**Test against local dev server (default):**
+```bash
+# Single query
+pnpm -w run cli "who am I?"
+
+# Interactive mode
+pnpm -w run cli
+> who am I?
+> find my organizations
+> exit
+```
+
+**Test against production:**
+```bash
+pnpm -w run cli --mcp-host=https://mcp.sentry.dev "who am I?"
+```
+
+**Test with specific MCP URL:**
+```bash
+# Custom deployment
+pnpm -w run cli --mcp-host=https://your-worker.workers.dev "query"
+
+# Set via environment variable
+export MCP_URL=https://mcp.sentry.dev
+pnpm -w run cli "query"
+```
+
+### Testing Agent Mode
+
+Agent mode uses only the `use_sentry` tool (natural language interface):
+
+```bash
+# Test agent mode locally
+pnpm -w run cli --agent "show me my recent errors"
+
+# Test agent mode in production
+pnpm -w run cli --mcp-host=https://mcp.sentry.dev --agent "what projects do I have?"
+```
+
+**Agent mode is ~2x slower** because it requires an additional AI call to translate natural language to tool calls.
+
+### OAuth Flow Testing
+
+**First run triggers OAuth:**
+```bash
+pnpm -w run cli "who am I?"
+```
+
+**Flow:**
+1. CLI opens browser to `http://localhost:5173/oauth/authorize`
+2. You're redirected to Sentry OAuth
+3. Approve access and grant permissions
+4. Redirected back with tokens
+5. CLI receives tokens and executes query
+
+**Subsequent runs use cached tokens:**
+- Tokens stored in `~/.sentry-mcp-tokens.json`
+- Automatically refreshed when expired
+- To force re-auth: delete the token file
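These steps can be done from the shell (assuming the token file is plain JSON with an `expires_at` field, as used elsewhere in this playbook):

```shell
# Inspect the cached token's expiry, if any
jq '.expires_at' ~/.sentry-mcp-tokens.json 2>/dev/null || echo "no cached tokens"

# Force a fresh OAuth flow on the next CLI run
rm -f ~/.sentry-mcp-tokens.json
```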
+
+### Testing with Constraints
+
+Constraints limit access to specific organizations/projects:
+
+**Organization constraint:**
+```bash
+# Access limited to "sentry" org
+pnpm -w run cli --mcp-host=http://localhost:5173/mcp/sentry "list my projects"
+```
+
+**Organization + Project constraint:**
+```bash
+# Access limited to "sentry/javascript"
+pnpm -w run cli \
+ --mcp-host=http://localhost:5173/mcp/sentry/javascript \
+ "show me recent errors"
+```
+
+**Verify constraints work:**
+```bash
+# Should only return projects in "sentry" org
+pnpm -w run cli --mcp-host=http://localhost:5173/mcp/sentry "find_projects()"
+
+# Should return error if trying to access different org
+pnpm -w run cli --mcp-host=http://localhost:5173/mcp/sentry "find_projects(organizationSlug='other-org')"
+```
+
+## Testing with Web Chat Interface
+
+The web UI provides a chat interface for testing the MCP server.
+
+### Access the Interface
+
+**Local development:**
+1. Start dev server: `pnpm dev`
+2. Open browser: `http://localhost:5173`
+3. Follow OAuth flow to authenticate
+
+**Production:**
+1. Navigate to `https://mcp.sentry.dev`
+2. Follow OAuth flow
+
+### Testing Workflow
+
+**1. Authentication:**
+- Click "Connect to Sentry"
+- Authorize the application
+- Grant permissions (select skills)
+
+**2. Basic Queries:**
+- "Who am I?" - Test authentication
+- "Show my organizations" - Test data access
+- "List projects in [org-name]" - Test specific queries
+
+**3. Complex Operations:**
+- "Find unresolved errors in [project]"
+- "Show me performance issues from yesterday"
+- "Search for 'database timeout' errors"
+
+**4. Test Constraints:**
+Navigate to constrained URL:
+- `http://localhost:5173/mcp/your-org` - Org constraint
+- `http://localhost:5173/mcp/your-org/your-project` - Org + project constraint
+
+Verify queries are limited to that scope.
+
+### Chat Interface Features
+
+**Available in chat:**
+- Natural language queries
+- Tool call visualization
+- Streaming responses
+- Error display
+- Token refresh handling
+
+**Not available (use CLI for these):**
+- Direct tool calls with parameters
+- Agent mode toggle
+- Multiple simultaneous queries
+
+## Testing with MCP Inspector
+
+The MCP Inspector can test remote servers via HTTP transport.
+
+### Setup Inspector
+
+```bash
+# From repo root
+pnpm inspector
+```
+
+Opens at `http://localhost:6274`
+
+### Connect to Remote Server
+
+**1. Add Server:**
+- Click "Add Server"
+- Select "SSE" or "HTTP" transport
+- Enter URL: `http://localhost:5173/mcp`
+
+**2. Authenticate:**
+- Click "Connect"
+- Browser opens for OAuth flow
+- Approve access
+- Redirected back to Inspector
+
+**3. Test Tools:**
+- Click "List Tools" - Verify tools appear
+- Test individual tools with parameters
+- View responses and errors
+
+### Inspector Testing Patterns
+
+**Basic verification:**
+1. List tools → Verify expected tools available
+2. Call `whoami` → Verify authentication
+3. Call `find_organizations` → Verify data access
+
+**Parameter testing:**
+```json
+// Test find_projects with parameters
+{
+ "organizationSlug": "your-org",
+ "query": "bookmarks:true"
+}
+```
+
+**Error testing:**
+```json
+// Invalid org should error gracefully
+{
+ "organizationSlug": "nonexistent-org"
+}
+```
+
+**Complex tool testing:**
+```json
+// Test search_events with AI
+{
+ "organizationSlug": "your-org",
+ "naturalLanguageQuery": "errors in the last hour",
+ "dataset": "errors"
+}
+```
+
+## Common Testing Workflows
+
+### 1. End-to-End OAuth Flow
+
+```bash
+# Clean slate
+rm ~/.sentry-mcp-tokens.json
+
+# Test fresh OAuth
+pnpm -w run cli "who am I?"
+
+# Verify tokens cached
+ls -la ~/.sentry-mcp-tokens.json
+
+# Test cached tokens work
+pnpm -w run cli "list organizations"
+```
+
+### 2. Test Skills Permissions
+
+**In OAuth approval screen, test:**
+- Minimal skills (inspect, docs only)
+- Default skills (inspect, seer, docs)
+- All skills (inspect, seer, docs, triage, project-management)
+
+**Verify tools filtered by skills:**
+```bash
+# With inspect, docs only: no write tools
+pnpm -w run cli "list tools" | grep "create_"
+
+# With all skills: includes write tools
+# Should see: create_project, create_team, update_issue, etc.
+```
+
+### 3. Test Multi-Organization Access
+
+```bash
+# User with multiple orgs
+pnpm -w run cli "find_organizations()"
+
+# Should list all accessible orgs
+# Test switching between orgs in chat interface
+```
+
+### 4. Test Constraints
+
+```bash
+# No constraint - full access
+pnpm -w run cli --mcp-host=http://localhost:5173/mcp \
+ "find_organizations()"
+
+# Org constraint - limited access
+pnpm -w run cli --mcp-host=http://localhost:5173/mcp/sentry \
+ "find_organizations()"
+ # Should only return "sentry" org
+
+# Project constraint - most limited
+pnpm -w run cli --mcp-host=http://localhost:5173/mcp/sentry/javascript \
+ "find_projects()"
+ # Should only return "javascript" project
+```
+
+### 5. Test After Code Changes
+
+```bash
+# 1. Build changes
+pnpm -w run build
+
+# 2. Restart dev server
+pnpm dev
+
+# 3. Test with CLI
+pnpm -w run cli "who am I?"
+
+# 4. Test in web UI
+# Open http://localhost:5173 and test manually
+
+# 5. Run integration tests
+cd packages/mcp-cloudflare
+pnpm test
+```
+
+### 6. Test Token Refresh
+
+```bash
+# Get initial tokens
+pnpm -w run cli "who am I?"
+
+# Simulate token expiry (edit token file)
+# Set expires_at to past timestamp in ~/.sentry-mcp-tokens.json
+
+# Next request should refresh automatically
+pnpm -w run cli "who am I?"
+
+# Verify new token in file
+cat ~/.sentry-mcp-tokens.json | jq .expires_at
+```
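Step 2's manual edit can be scripted (assumes the token file is plain JSON with a top-level `expires_at`, as the `jq` check above shows):

```shell
# Force expiry: rewrite expires_at to a past epoch timestamp
tokens="$HOME/.sentry-mcp-tokens.json"
tmp=$(mktemp)
jq '.expires_at = 0' "$tokens" > "$tmp" && mv "$tmp" "$tokens"
```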
+
+## Troubleshooting
+
+### "Failed to connect to MCP server"
+
+**Causes:**
+1. Dev server not running
+2. Wrong URL
+3. Network issues
+
+**Solution:**
+```bash
+# Verify dev server is running
+curl http://localhost:5173/
+
+# Check MCP endpoint
+curl http://localhost:5173/mcp
+
+# Restart dev server
+pnpm dev
+```
+
+### "OAuth authorization failed"
+
+**Causes:**
+1. Invalid client ID/secret
+2. Redirect URI mismatch
+3. Expired OAuth app
+
+**Solution:**
+```bash
+# Verify credentials in .env
+cat packages/mcp-cloudflare/.env | grep SENTRY_CLIENT
+
+# Check OAuth app settings in Sentry
+# Redirect URI must match exactly: http://localhost:5173/oauth/callback
+
+# Try with fresh credentials
+# Delete and recreate OAuth app in Sentry
+```
+
+### "Permission denied" errors
+
+**Cause:** Insufficient skills granted during OAuth.
+
+**Solution:**
+```bash
+# Force re-authorization with more skills
+rm ~/.sentry-mcp-tokens.json
+pnpm -w run cli "who am I?"
+
+# In OAuth approval screen, select all needed skills
+```
+
+### "Tool not found" errors
+
+**Causes:**
+1. Tool filtered by skills
+2. Build issue
+3. Server version mismatch
+
+**Solution:**
+```bash
+# Check tool list
+pnpm -w run cli "list tools" | jq '.tools[] | .name'
+
+# Verify skills include required permissions
+# Example: create_project requires project-management skill
+
+# Rebuild and restart
+pnpm -w run build && pnpm dev
+```
+
+### "Invalid constraint" errors
+
+**Cause:** Trying to access resources outside constrained scope.
+
+**Solution:**
+```bash
+# Verify constraint in URL
+echo "Accessing: http://localhost:5173/mcp/org-slug/project-slug"
+
+# Verify you have access to that org/project
+pnpm -w run cli --mcp-host=http://localhost:5173/mcp \
+ "find_organizations()"
+```
+
+### Web UI not loading
+
+**Causes:**
+1. Build failed
+2. Assets not compiled
+3. Wrangler issue
+
+**Solution:**
+```bash
+# Rebuild assets
+cd packages/mcp-cloudflare
+pnpm build
+
+# Clear Wrangler cache
+rm -rf .wrangler
+
+# Restart dev server
+pnpm dev
+
+# Check browser console for errors
+```
+
+### "Rate limited" errors
+
+**Cause:** Too many requests to Sentry API.
+
+**Solution:**
+```bash
+# Wait for rate limit to reset (usually 60 seconds)
+
+# Use fewer requests in testing
+# Example: Don't query all projects repeatedly
+
+# For testing, use mocked responses instead
+cd packages/mcp-server
+pnpm test
+```
+
+## Quality Checks
+
+Before deploying changes:
+
+```bash
+# 1. Build successfully
+pnpm -w run build
+
+# 2. Type check
+pnpm -w run tsc
+
+# 3. Lint
+pnpm -w run lint
+
+# 4. Run tests
+pnpm -w run test
+
+# 5. Test OAuth flow locally
+rm ~/.sentry-mcp-tokens.json
+pnpm -w run cli "who am I?"
+
+# 6. Test web UI locally
+# Open http://localhost:5173 and verify:
+# - Authentication works
+# - Chat interface works
+# - Tool calls execute correctly
+
+# 7. Test with Inspector
+pnpm inspector
+# Connect to http://localhost:5173/mcp
+# Verify tools list and basic operations
+```
+
+## Deployment Checklist
+
+### Before Production Deploy
+
+- [ ] All tests pass
+- [ ] OAuth flow tested locally
+- [ ] Web UI tested locally
+- [ ] CLI client tested against local server
+- [ ] Constraints tested (org and project level)
+- [ ] Error handling verified
+- [ ] Token refresh tested
+
+### Production Deploy
+
+```bash
+# Via GitHub Actions (automatic)
+git push origin main
+
+# Manual (if needed)
+cd packages/mcp-cloudflare
+pnpm deploy --env production
+```
+
+### After Deploy
+
+- [ ] Test OAuth against production
+- [ ] Test web UI at https://mcp.sentry.dev
+- [ ] Test CLI: `pnpm -w run cli --mcp-host=https://mcp.sentry.dev "who am I?"`
+- [ ] Test MCP Inspector against production
+- [ ] Verify no errors in Cloudflare dashboard
+- [ ] Check Sentry for any errors
+
+## Comparing with Stdio
+
+Key differences to verify:
+
+| Feature | Stdio | Remote |
+|---------|-------|--------|
+| Authentication | Access token | OAuth |
+| Constraints | Via CLI flags | Via URL path |
+| Transport | stdin/stdout | HTTP/SSE |
+| Multi-user | No | Yes |
+| Token refresh | N/A | Automatic |
+| Web UI | No | Yes |
+| Performance | Faster (no network) | Network latency |
+
+**Test both work identically:**
+```bash
+# Stdio
+pnpm start --access-token=TOKEN
+# (Use MCP Inspector with stdio config)
+
+# Remote
+pnpm -w run cli "who am I?"
+
+# Both should:
+# - Return same tool list
+# - Execute tools with same results
+# - Handle errors the same way
+```
+
+## Advanced Testing
+
+### Testing Token Encryption
+
+```bash
+# Verify tokens are encrypted in KV
+# (In Cloudflare dashboard, inspect KV values)
+# Should see encrypted blobs, not plaintext tokens
+```
+
+### Load Testing
+
+```bash
+# Use autocannon for HTTP load testing
+npm install -g autocannon
+
+# Test MCP endpoint
+autocannon -c 10 -d 30 http://localhost:5173/mcp
+```
+
+### Testing Multiple Clients
+
+```bash
+# Terminal 1: Client A
+pnpm -w run cli "who am I?"
+
+# Terminal 2: Client B (different user)
+rm ~/.sentry-mcp-tokens.json
+pnpm -w run cli "who am I?"
+
+# Verify both work independently
+# Verify constraints apply per-client
+```
+
+### Testing Regional Deployments
+
+```bash
+# Test with different Sentry regions
+pnpm -w run cli --mcp-host=https://us.mcp.sentry.dev "query"
+pnpm -w run cli --mcp-host=https://eu.mcp.sentry.dev "query"
+```
+
+## Environment-Specific Testing
+
+### Testing Production
+
+```bash
+# Use production URL
+pnpm -w run cli --mcp-host=https://mcp.sentry.dev "who am I?"
+
+# Test production OAuth app
+# (Requires production OAuth credentials)
+```
+
+### Testing Staging
+
+```bash
+# Use staging deployment
+pnpm -w run cli --mcp-host=https://staging.mcp.sentry.dev "who am I?"
+```
+
+### Testing Self-Hosted
+
+```bash
+# Deploy to self-hosted Cloudflare account
+cd packages/mcp-cloudflare
+pnpm deploy
+
+# Test with self-hosted URL
+pnpm -w run cli --mcp-host=https://your-worker.workers.dev "who am I?"
+```
+
+## References
+
+- Remote setup: `docs/cloudflare/deployment.md`
+- OAuth architecture: `docs/cloudflare/oauth-architecture.md`
+- CLI client: `packages/mcp-test-client/README.md`
+- Cloudflare package: `packages/mcp-cloudflare/README.md`
+- MCP Inspector: https://modelcontextprotocol.io/docs/tools/inspector
diff --git a/docs/testing-stdio.md b/docs/testing-stdio.md
new file mode 100644
index 000000000..ff060e8a0
--- /dev/null
+++ b/docs/testing-stdio.md
@@ -0,0 +1,688 @@
+# Testing the Stdio Implementation
+
+Complete playbook for building, running, and testing the stdio MCP server.
+
+## Overview
+
+The stdio transport runs the MCP server as a subprocess that communicates via stdin/stdout pipes. This is the standard way IDEs and local tools integrate with MCP servers.
+
+**When to use stdio:**
+- Testing with IDEs (Cursor, VSCode with MCP extension)
+- Self-hosted Sentry deployments
+- Local development without OAuth
+- Direct API token authentication
+
+**When to use remote instead:**
+- Testing OAuth flows
+- Testing constraint-based access control
+- Testing the web chat interface
+- Production-like environment testing
+
+## Prerequisites
+
+- Node.js 20+
+- pnpm installed
+- Sentry access token with appropriate scopes
+
+### Required Scopes
+
+For full functionality, create a Sentry User Auth Token with:
+- `org:read` - List organizations
+- `project:read` - Access project data
+- `project:write` - Create/update projects
+- `team:read` - Access team data
+- `team:write` - Create teams
+- `event:read` - Access events and issues
+- `event:write` - Update issues, add comments
+
+For read-only testing, use just: `org:read`, `project:read`, `team:read`, `event:read`
+
+## Build Process
+
+### 1. Initial Setup
+
+```bash
+# Clone and install dependencies
+cd sentry-mcp
+pnpm install
+
+# Set up environment (optional for stdio)
+make setup-env
+```
+
+### 2. Build the Package
+
+```bash
+# Build all packages (includes mcp-server)
+pnpm -w run build
+
+# Or build just mcp-server
+cd packages/mcp-server
+pnpm build
+```
+
+The build process:
+1. Generates tool definitions (`pnpm run generate-definitions`)
+2. Compiles TypeScript to JavaScript
+3. Creates both ESM and CJS outputs
+4. Generates type declarations
+5. Makes the CLI executable
+
+**Output location:** `packages/mcp-server/dist/`
+
+### 3. Verify Build
+
+```bash
+# Check the built executable exists
+ls -la packages/mcp-server/dist/index.js
+
+# Test the CLI help
+node packages/mcp-server/dist/index.js --help
+```
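If `--help` fails, inspect the entry point directly (assumes the build emits a CLI whose first line is a shebang, per the "Makes the CLI executable" step above):

```shell
# The first line of a CLI entry point should be a shebang
head -n 1 packages/mcp-server/dist/index.js | grep -q '^#!' \
  && echo "shebang present" || echo "no shebang"
```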
+
+## Running Stdio Locally
+
+### Option 1: Using pnpm start (Development)
+
+Best for active development with TypeScript sources:
+
+```bash
+cd packages/mcp-server
+pnpm start --access-token=YOUR_TOKEN
+```
+
+This uses `tsx` to run TypeScript directly without building.
+
+### Option 2: Using the Built Package (Production-like)
+
+Test the actual build output:
+
+```bash
+# From repo root
+node packages/mcp-server/dist/index.js --access-token=YOUR_TOKEN
+
+# Or use the workspace command
+pnpm -w run mcp-server --access-token=YOUR_TOKEN
+```
+
+### Option 3: Using npx (End-user Experience)
+
+Test the published package experience:
+
+```bash
+# Latest from npm
+npx @sentry/mcp-server@latest --access-token=YOUR_TOKEN
+
+# Test local build (after packing)
+cd packages/mcp-server
+pnpm pack
+npx ./sentry-mcp-server-*.tgz --access-token=YOUR_TOKEN
+```
+
+## Testing with MCP Inspector
+
+The MCP Inspector is the best tool for interactive testing of the stdio transport.
+
+### 1. Start the Inspector
+
+```bash
+# From repo root
+pnpm inspector
+```
+
+This opens the MCP Inspector at `http://localhost:6274`
+
+### 2. Connect to Stdio Server
+
+**In the Inspector UI:**
+
+1. Click "Add Server"
+2. Select "Stdio" transport type
+3. Configure the command:
+
+**For development (TypeScript):**
+```json
+{
+ "command": "pnpm",
+ "args": [
+ "--dir",
+ "/absolute/path/to/sentry-mcp/packages/mcp-server",
+ "start",
+ "--access-token=YOUR_TOKEN"
+ ]
+}
+```
+
+**For built package:**
+```json
+{
+ "command": "node",
+ "args": [
+ "/absolute/path/to/sentry-mcp/packages/mcp-server/dist/index.js",
+ "--access-token=YOUR_TOKEN"
+ ]
+}
+```
+
+**For self-hosted Sentry:**
+```json
+{
+ "command": "npx",
+ "args": [
+ "@sentry/mcp-server@latest",
+ "--access-token=YOUR_TOKEN",
+ "--host=sentry.example.com"
+ ]
+}
+```
+
+4. Click "Connect"
+5. Click "List Tools" to verify connection
+
+### 3. Test Tools Interactively
+
+**Basic workflow:**
+1. **List Tools** - Verify expected tools appear
+2. **Call a tool** - Start with `whoami` (no parameters required)
+3. **Test with parameters** - Try `find_organizations()`
+4. **Test complex operations** - Try `search_events(naturalLanguageQuery="errors in the last hour")`
+
+**Example test sequence:**
+```
+1. whoami()
+2. find_organizations()
+3. find_projects(organizationSlug="your-org")
+4. search_events(
+ organizationSlug="your-org",
+ naturalLanguageQuery="errors from yesterday"
+ )
+```
+
+## Testing with CLI Client (Recommended for Quick Tests)
+
+The `mcp-test-client` package provides a CLI-based way to test the stdio transport without needing a browser.
+
+### Transport Selection
+
+The CLI client automatically selects the transport based on flags:
+
+- **Stdio transport**: `--access-token` flag provided
+- **Remote HTTP transport**: `--mcp-host` flag or no access token
+
+### Basic Usage
+
+**Test stdio transport (local):**
+```bash
+# Single query
+pnpm -w run cli --access-token=YOUR_TOKEN "who am I?"
+
+# Interactive mode
+pnpm -w run cli --access-token=YOUR_TOKEN
+> who am I?
+> find my organizations
+> exit
+```
+
+**Verify stdio is being used:**
+Look for this in the output:
+```
+⏺ Connected to MCP server (stdio)
+ ⎿ 20 tools available
+```
+
+### Example Test Session
+
+```bash
+# Test 1: Verify connection and tool count
+$ pnpm -w run cli --access-token=YOUR_TOKEN "list all available tools"
+
+⏺ Connected to MCP server (stdio)
+ ⎿ 20 tools available
+
+⏺ Here are the available tools:
+ 1. whoami
+ 2. find_organizations
+ 3. find_teams
+ [... 17 more tools ...]
+
+# Test 2: Test a specific tool
+$ pnpm -w run cli --access-token=YOUR_TOKEN "who am I?"
+
+⏺ Connected to MCP server (stdio)
+ ⎿ 20 tools available
+
+⏺ whoami()
+ ⎿ You are authenticated as: user@example.com
+```
+
+### Testing with Fake Token
+
+For testing the stdio mechanics without real API calls:
+
+```bash
+# This will test:
+# - Stdio transport initialization ✓
+# - Tool loading ✓
+# - Tool execution ✓
+# - API error handling ✓ (expected 400 error)
+
+pnpm -w run cli --access-token=test-fake-token-12345 "who am I?"
+```
+
+**Expected output:**
+```
+⏺ Connected to MCP server (stdio)
+ ⎿ 20 tools available
+
+⏺ whoami()
+ ⎿ **Input Error**
+ API error (400): Bad Request
+```
+
+### Comparing Stdio vs Remote
+
+```bash
+# Stdio (uses --access-token)
+pnpm -w run cli --access-token=YOUR_TOKEN "query"
+# Output: "Connected to MCP server (stdio)"
+
+# Remote HTTP (uses --mcp-host or defaults to http://localhost:5173)
+pnpm -w run cli --mcp-host=https://mcp.sentry.dev "query"
+# Output: "Connected to MCP server (remote)"
+```
+
+## Testing with IDEs
+
+### Cursor IDE
+
+**1. Configure in Cursor Settings:**
+
+Create or edit `.cursor/mcp.json`:
+
+```json
+{
+ "mcpServers": {
+ "sentry": {
+ "command": "npx",
+ "args": [
+ "@sentry/mcp-server@latest",
+ "--access-token=YOUR_TOKEN"
+ ]
+ }
+ }
+}
+```
+
+**2. Use Environment Variables (Recommended):**
+
+Create `.cursor/.env`:
+```bash
+SENTRY_ACCESS_TOKEN=your-token-here
+```
+
+Update config:
+```json
+{
+ "mcpServers": {
+ "sentry": {
+ "command": "npx",
+ "args": ["@sentry/mcp-server@latest"],
+ "env": {
+ "SENTRY_ACCESS_TOKEN": "${SENTRY_ACCESS_TOKEN}"
+ }
+ }
+ }
+}
+```
+
+**3. Test in Cursor:**
+- Restart Cursor
+- Open command palette (Cmd/Ctrl + Shift + P)
+- Search for "MCP" to verify server is connected
+- Ask Cursor: "What Sentry projects do I have access to?"
+
+### VSCode with MCP Extension
+
+**1. Install MCP Extension:**
+- Search for "Model Context Protocol" in VSCode extensions
+- Install the official MCP extension
+
+**2. Configure in VSCode Settings:**
+
+Add to `.vscode/settings.json`:
+```json
+{
+ "mcp.servers": {
+ "sentry": {
+ "command": "npx",
+ "args": [
+ "@sentry/mcp-server@latest",
+ "--access-token=YOUR_TOKEN"
+ ]
+ }
+ }
+}
+```
+
+**3. Test:**
+- Reload window (Cmd/Ctrl + Shift + P → "Reload Window")
+- Use MCP-aware AI features to access Sentry data
+
+## Configuration Options
+
+### Command-Line Flags
+
+```bash
+# Basic usage
+--access-token=TOKEN # Sentry access token (required)
+
+# Host configuration
+--host=sentry.example.com # Self-hosted Sentry (hostname only)
+
+# Skills management
+--skills=inspect,docs,triage # Limit to specific skills (default: all available)
+
+# AI features (optional)
+--openai-base-url=URL # Custom OpenAI endpoint
+
+# Sentry reporting
+--sentry-dsn=DSN # Custom Sentry DSN for telemetry
+--sentry-dsn= # Disable telemetry
+
+# Agent mode (testing use_sentry tool)
+--agent # Enable agent mode (only use_sentry tool)
+
+# Help
+--help # Show all options
+--version # Show version
+```
+
+### Environment Variables
+
+```bash
+# Authentication
+SENTRY_ACCESS_TOKEN=your-token
+
+# Host (self-hosted only)
+SENTRY_HOST=sentry.example.com
+
+# Skills
+MCP_SKILLS=inspect,docs,triage # Limit to specific skills
+
+# AI features
+OPENAI_API_KEY=your-key # For search_events/search_issues
+
+# Sentry reporting
+SENTRY_DSN=your-dsn
+```
+
+**Priority:** Command-line flags override environment variables.
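The precedence rule can be illustrated with shell parameter expansion (an illustrative sketch of the rule, not the server's actual option parsing):

```shell
# CLI flag beats env var beats built-in default
flag_host="other.example.com"      # value from --host; empty when not passed
env_host="${SENTRY_HOST:-}"        # value from the environment
host="${flag_host:-${env_host:-sentry.io}}"
echo "$host"   # → other.example.com
```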
+
+## Common Testing Workflows
+
+### 1. Test After Code Changes
+
+```bash
+# Full rebuild and test cycle
+pnpm -w run build && pnpm -w run test
+
+# Quick smoke test: pipe a single JSON-RPC request over stdio
+cd packages/mcp-server
+echo '{"jsonrpc":"2.0","id":1,"method":"tools/list"}' | pnpm start --access-token=TOKEN
+```
+
+### 2. Test Against Self-Hosted Sentry
+
+```bash
+# Development server
+pnpm start \
+ --access-token=TOKEN \
+ --host=sentry.local.dev
+
+# Built package
+node dist/index.js \
+ --access-token=TOKEN \
+ --host=sentry.local.dev
+```
+
+### 3. Test Skills
+
+```bash
+# Test with all skills (default)
+pnpm start --access-token=TOKEN
+
+# Test with specific skills only
+pnpm start --access-token=TOKEN --skills=inspect,docs
+
+# Test read-only skills
+pnpm start --access-token=TOKEN --skills=inspect,seer,docs
+```
+
+### 4. Test AI-Powered Tools
+
+```bash
+# With OpenAI API key
+OPENAI_API_KEY=your-key pnpm start --access-token=TOKEN
+
+# Test search_events and search_issues work
+# In MCP Inspector:
+# - Call search_events(naturalLanguageQuery="errors in production")
+# - Call search_issues(naturalLanguageQuery="unresolved crashes")
+```
+
+### 5. Test Agent Mode
+
+```bash
+# Enable agent mode (only use_sentry tool available)
+pnpm start --access-token=TOKEN --agent
+
+# In Inspector, verify:
+# - Only "use_sentry" tool appears in list
+# - Test: use_sentry(request="show me my organizations")
+```
+
+## Troubleshooting
+
+### "Command not found: npx @sentry/mcp-server"
+
+**Cause:** Package not published or not in npm registry.
+
+**Solution:**
+```bash
+# Use local build instead
+cd packages/mcp-server
+pnpm build
+node dist/index.js --access-token=TOKEN
+```
+
+### "Missing required parameter: access-token"
+
+**Cause:** No authentication provided.
+
+**Solution:**
+```bash
+# Option 1: CLI flag
+pnpm start --access-token=YOUR_TOKEN
+
+# Option 2: Environment variable
+export SENTRY_ACCESS_TOKEN=YOUR_TOKEN
+pnpm start
+```
+
+### "AI-powered search tools unavailable"
+
+**Cause:** Missing `OPENAI_API_KEY`.
+
+**Solution:**
+```bash
+# Set the API key
+export OPENAI_API_KEY=your-openai-key
+pnpm start --access-token=TOKEN
+
+# Or continue without AI-powered search
+# (all other tools will work normally)
+```
+
+### "Cannot connect to Sentry API"
+
+**Causes:**
+1. Invalid access token
+2. Wrong host configuration
+3. Network issues
+
+**Solution:**
+```bash
+# Test token manually
+curl -H "Authorization: Bearer YOUR_TOKEN" \
+ https://sentry.io/api/0/
+
+# For self-hosted, verify host
+curl -H "Authorization: Bearer YOUR_TOKEN" \
+ https://your-sentry-host.com/api/0/
+
+# Check logs for detailed error
+pnpm start --access-token=TOKEN 2>&1 | tee debug.log
+```
+
+### "MCP Inspector can't connect to stdio server"
+
+**Causes:**
+1. Incorrect command path
+2. Missing dependencies
+3. Process exits immediately
+
+**Solution:**
+```bash
+# Test command manually first
+pnpm start --access-token=TOKEN
+
+# If it works, use absolute paths in Inspector
+which node # Get absolute path to node
+pwd # Get absolute path to project
+
+# Use absolute paths in Inspector config
+{
+ "command": "/usr/local/bin/node",
+ "args": ["/absolute/path/to/sentry-mcp/packages/mcp-server/dist/index.js"]
+}
+```
+
+### "Permission denied" errors
+
+**Cause:** Built executable not marked as executable.
+
+**Solution:**
+```bash
+chmod +x packages/mcp-server/dist/index.js
+```
+
+### "Module not found" errors after build
+
+**Cause:** Missing dependencies in built output.
+
+**Solution:**
+```bash
+# Clean and rebuild
+cd packages/mcp-server
+rm -rf dist
+pnpm build
+
+# Verify dependencies are bundled
+ls -lh dist/
+```
+
+## Quality Checks
+
+Before committing changes that affect stdio:
+
+```bash
+# 1. Build successfully
+pnpm -w run build
+
+# 2. Type check
+pnpm -w run tsc
+
+# 3. Lint
+pnpm -w run lint
+
+# 4. Unit tests pass
+cd packages/mcp-server
+pnpm test
+
+# 5. Smoke test stdio works
+echo '{"jsonrpc":"2.0","id":1,"method":"tools/list"}' | \
+ node dist/index.js --access-token=TOKEN
+```
+
+## Comparing with Remote
+
+To verify stdio behaves the same as remote:
+
+**Test with both transports:**
+```bash
+# 1. Test stdio locally
+pnpm start --access-token=TOKEN
+# Use MCP Inspector to test tools
+
+# 2. Test remote
+pnpm -w run cli --mcp-host=https://mcp.sentry.dev "who am I"
+
+# 3. Compare results
+# Both should return same data, same tool list
+```
+
+**Key differences to expect:**
+- **Authentication:** Stdio uses access tokens, remote uses OAuth
+- **Constraints:** Remote supports URL-based org/project constraints
+- **Tools:** Both should have same tool count and functionality
+- **Performance:** Stdio has no network overhead (faster)
+
+## Advanced Testing
+
+### Testing Custom Builds
+
+```bash
+# Pack the local build
+cd packages/mcp-server
+pnpm pack
+
+# Install globally for testing
+npm install -g ./sentry-mcp-server-*.tgz
+
+# Test as end-user would
+sentry-mcp --access-token=TOKEN
+```
+
+### Testing with Different Node Versions
+
+```bash
+# Using nvm
+nvm install 20
+nvm use 20
+pnpm start --access-token=TOKEN
+
+nvm install 22
+nvm use 22
+pnpm start --access-token=TOKEN
+```
+
+### Performance Testing
+
+```bash
+# Time tool execution for a single request
+time (echo '{"jsonrpc":"2.0","id":1,"method":"tools/list"}' | \
+  node dist/index.js --access-token=TOKEN)
+```
+
+## Testing Philosophy
+
+Test our functionality, not the platform's.
+
+**Bad approach** (testing language behavior, not our code):
+```typescript
+// ❌ Exercises JavaScript itself, not the server
+it("captures context", () => {
+  const context = { value: 42 };
+  const fn = () => context;
+  expect(fn().value).toBe(42);
+});
+```
+
+**Good approach** (testing our functionality):
+```typescript
+// ✅ Test that server configuration works
+it("builds server with context for tool handlers", async () => {
+ const server = buildServer({ context });
+ expect(server).toBeDefined();
+});
+```
+
+## Testing Levels
+
+### 1. Functional Tests
+Fast, focused tests of actual functionality:
+- Located alongside source files (`*.test.ts`)
+- Use Vitest with inline snapshots
+- Mock external APIs only (Sentry API, OpenAI) with MSW
+- Use real implementations for internal code
+- Test through public APIs rather than implementation details
+
+### 2. Evaluation Tests
+Real-world scenarios with LLM:
+- Located in `packages/mcp-server-evals`
+- Use actual AI models
+- Verify end-to-end functionality
+- Test complete workflows
+
+### 3. Manual Testing
+Interactive testing with the MCP test client (preferred for testing MCP changes):
+
+```bash
+# Test with local dev server (default: http://localhost:5173)
+pnpm -w run cli "who am I?"
+
+# Test agent mode (use_sentry tool only) - approximately 2x slower
+pnpm -w run cli --agent "who am I?"
+
+# Test against production
+pnpm -w run cli --mcp-host=https://mcp.sentry.dev "query"
+
+# Test with local stdio mode (requires SENTRY_ACCESS_TOKEN)
+pnpm -w run cli --access-token=TOKEN "query"
+```
+
+**When to use manual testing:**
+- Verifying end-to-end MCP server behavior
+- Testing OAuth flows
+- Debugging tool interactions
+- Validating real API responses
+- Testing AI-powered tools (search_events, search_issues, use_sentry)
+
+**Note:** The CLI defaults to `http://localhost:5173` for easier local development. Override with `--mcp-host` or set `MCP_URL` environment variable to test against different servers.
+
+## Functional Testing Patterns
+
+See `adding-tools.mdc#step-3-add-tests` for the complete tool testing workflow.
+
+### Basic Test Structure
+
+```typescript
+describe("tool_name", () => {
+ it("returns formatted output", async () => {
+ const result = await TOOL_HANDLERS.tool_name(mockContext, {
+ organizationSlug: "test-org",
+ param: "value"
+ });
+
+ expect(result).toMatchInlineSnapshot(`
+ "# Expected Output
+
+ Formatted markdown response"
+ `);
+ });
+});
+```
+
+**NOTE**: Follow error handling patterns from `common-patterns.mdc#error-handling` when testing error cases.
+
+### Testing Error Cases
+
+```typescript
+it("validates required parameters", async () => {
+ await expect(
+ TOOL_HANDLERS.tool_name(mockContext, {})
+ ).rejects.toThrow(UserInputError);
+});
+
+it("handles API errors gracefully", async () => {
+ server.use(
+ http.get("*/api/0/issues/*", () =>
+ HttpResponse.json({ detail: "Not found" }, { status: 404 })
+ )
+ );
+
+ await expect(handler(mockContext, params))
+ .rejects.toThrow("Issue not found");
+});
+```
+
+## Mock Server Setup
+
+Use MSW patterns from `api-patterns.mdc#mock-patterns` for API mocking.
+
+### Test Configuration
+
+```typescript
+// packages/mcp-server/src/test-utils/setup.ts
+import { setupMockServer } from "@sentry-mcp/mocks";
+
+export const mswServer = setupMockServer();
+
+// Global test setup
+beforeAll(() => mswServer.listen({ onUnhandledRequest: "error" }));
+afterEach(() => mswServer.resetHandlers());
+afterAll(() => mswServer.close());
+```
+
+### Mock Context
+
+```typescript
+export const mockContext: ServerContext = {
+ host: "sentry.io",
+ accessToken: "test-token",
+ organizationSlug: "test-org"
+};
+```
+
+## Snapshot Testing
+
+### When to Use Snapshots
+
+Use inline snapshots for:
+- Tool output formatting
+- Error message text
+- Markdown responses
+- JSON structure validation
+
+### Updating Snapshots
+
+When output changes are intentional:
+
+```bash
+cd packages/mcp-server
+pnpm vitest --run -u
+```
+
+**Always review snapshot changes before committing!**
+
+### Snapshot Best Practices
+
+```typescript
+// Good: Inline snapshot for output verification
+expect(result).toMatchInlineSnapshot(`
+ "# Issues in **my-org**
+
+ Found 2 unresolved issues"
+`);
+
+// Bad: Don't use snapshots for dynamic data
+expect(result.timestamp).toMatchInlineSnapshot(); // changes on every run
+```
+
+## Evaluation Testing
+
+### Eval Test Structure
+
+```typescript
+import { describeEval } from "vitest-evals";
+import { TaskRunner, Factuality } from "./utils";
+
+describeEval("tool-name", {
+ data: async () => [
+ {
+ input: "Natural language request",
+ expected: "Expected response content"
+ }
+ ],
+ task: TaskRunner(), // Uses AI to call tools
+ scorers: [Factuality()], // Validates output
+ threshold: 0.6,
+ timeout: 30000
+});
+```
+
+### Running Evals
+
+```bash
+# Requires OPENAI_API_KEY in .env
+pnpm eval
+
+# Run specific eval
+pnpm eval tool-name
+```
+
+## Test Data Management
+
+### Using Fixtures
+
+```typescript
+import { issueFixture } from "@sentry-mcp/mocks";
+
+// Modify fixture for test case
+const customIssue = {
+ ...issueFixture,
+ status: "resolved",
+ id: "CUSTOM-123"
+};
+```
+
+### Dynamic Test Data
+
+```typescript
+// Generate test data
+function createTestIssues(count: number) {
+ return Array.from({ length: count }, (_, i) => ({
+ ...issueFixture,
+ id: `TEST-${i}`,
+ title: `Test Issue ${i}`
+ }));
+}
+```
+
+## Performance Testing
+
+### Timeout Configuration
+
+```typescript
+it("handles large datasets", async () => {
+ const largeDataset = createTestIssues(1000);
+
+ const result = await handler(mockContext, params);
+ expect(result).toBeDefined();
+}, { timeout: 10000 }); // 10 second timeout
+```
+
+### Memory Testing
+
+```typescript
+it("streams large responses efficiently", async () => {
+ const initialMemory = process.memoryUsage().heapUsed;
+
+ await processLargeDataset();
+
+ const memoryIncrease = process.memoryUsage().heapUsed - initialMemory;
+ expect(memoryIncrease).toBeLessThan(50 * 1024 * 1024); // < 50MB
+});
+```
+
+## Common Testing Patterns
+
+See `common-patterns.mdc` for:
+- Mock server setup
+- Error handling tests
+- Parameter validation
+- Response formatting
+
+## CI/CD Integration
+
+Tests run automatically on:
+- Pull requests
+- Main branch commits
+- Pre-release checks
+
+Coverage requirements:
+- Statements: 80%
+- Branches: 75%
+- Functions: 80%
+
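If these thresholds are enforced through Vitest's coverage configuration, a minimal sketch looks like this (the exact config file location in this repo may differ):

```typescript
// vitest.config.ts (sketch): coverage thresholds matching the CI requirements above
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    coverage: {
      thresholds: {
        statements: 80,
        branches: 75,
        functions: 80,
      },
    },
  },
});
```

With thresholds set, `vitest run --coverage` exits non-zero when coverage drops below the configured values.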
+## References
+
+- Test setup: `packages/mcp-server/src/test-utils/`
+- Mock server: `packages/mcp-server-mocks/`
+- Eval tests: `packages/mcp-server-evals/`
+- Vitest docs: https://vitest.dev/
\ No newline at end of file
diff --git a/docs/token-cost-tracking.mdc b/docs/token-cost-tracking.mdc
new file mode 100644
index 000000000..063f364fd
--- /dev/null
+++ b/docs/token-cost-tracking.mdc
@@ -0,0 +1,133 @@
+---
+description: How to measure and track the token cost of MCP tool definitions
+globs:
+alwaysApply: false
+---
+# Token Cost Tracking
+
+Measures the static overhead of MCP tool definitions - the tokens sent to LLM clients with every request.
+
+## What's Being Measured
+
+The token cost of tool metadata that MCP sends to clients via `tools/list`:
+- Tool names and descriptions
+- Parameter schemas (JSON Schema)
+- Total overhead per tool and across all tools
+
+**Exclusions:**
+- `use_sentry` tool (agent-mode only, not exposed via standard MCP)
+- Runtime token usage by embedded agents (search_events, search_issues)
+
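Conceptually, the measurement serializes each tool exactly as it appears in a `tools/list` response and counts the tokens. A rough sketch (the `ToolMeta` shape and the chars/4 heuristic are illustrative stand-ins; the real script tokenizes with `tiktoken`):

```typescript
// Illustrative sketch of the measurement. The real script uses tiktoken's
// cl100k_base encoding; ~4 characters per token is a stand-in heuristic.
interface ToolMeta {
  name: string;
  description: string;
  inputSchema: object; // JSON Schema sent to clients via tools/list
}

function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4); // placeholder for a real tokenizer
}

function measure(tools: ToolMeta[]) {
  const perTool = tools.map((t) => ({
    name: t.name,
    tokens: estimateTokens(JSON.stringify(t)),
  }));
  const total = perTool.reduce((sum, t) => sum + t.tokens, 0);
  return {
    total_tokens: total,
    tool_count: tools.length,
    avg_tokens_per_tool: Math.round(total / tools.length),
    tools: perTool,
  };
}
```

The output mirrors the JSON format documented below.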
+## Running Locally
+
+**Display table (default):**
+```bash
+pnpm run measure-tokens
+```
+
+**Output:**
+```
+MCP Server Token Cost Report
+────────────────────────────────────────────────────────────
+Total Tokens: 9,069
+Tool Count: 19
+Average/Tool: 477
+────────────────────────────────────────────────────────────
+
+Per-Tool Breakdown:
+
+┌──────────────────────────────┬────────┬─────────┐
+│ Tool                         │ Tokens │ % Total │
+├──────────────────────────────┼────────┼─────────┤
+│ search_docs                  │   1036 │   11.4% │
+│ update_issue                 │    757 │    8.3% │
+...
+```
+
+**Write JSON to file:**
+```bash
+# From repository root
+pnpm run measure-tokens -- -o token-stats.json
+
+# Or from mcp-server package
+cd packages/mcp-server
+pnpm run measure-tokens -- -o token-stats.json
+```
+
+JSON format:
+```json
+{
+ "total_tokens": 9069,
+ "tool_count": 19,
+ "avg_tokens_per_tool": 477,
+ "tools": [
+ {"name": "search_docs", "tokens": 1036, "percentage": 11.4},
+ ...
+ ]
+}
+```
+
+## CI/CD Integration
+
+A GitHub Actions workflow runs on every PR and on each push to main:
+
+**On Pull Requests:**
+- **PR Comment:** Automatic comment with full report (updated on each push)
+- **Job Summary:** Detailed per-tool breakdown in Actions tab
+- **Artifact:** `token-stats-{sha}.json` stored for 90 days
+
+**On Main Branch:**
+- **Job Summary:** Detailed per-tool breakdown in Actions tab
+- **Artifact:** `token-stats-{sha}.json` stored for 90 days
+
+**Workflow:** `.github/workflows/token-cost.yml`
+
+## Understanding the Results
+
+**Current baseline (19 tools, excluding use_sentry):**
+- ~9,069 tokens total
+- ~477 tokens/tool average
+
+**Tool count limits:**
+- **Target:** ≤20 tools (current best practice)
+- **Maximum:** ≤25 tools (hard limit for AI agents)
+
+**When to investigate:**
+- Total tokens increase >10% without new tools
+- Individual tool >1,000 tokens (indicates overly verbose descriptions)
+- New tool adds >500 tokens (review description clarity)
+
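The first check above can be automated against the JSON artifact. A sketch of such a guard (the 10% rule and baseline mirror the numbers above; the function name is illustrative):

```typescript
// Sketch of a regression guard over the stats JSON shown earlier.
// Fails when total_tokens grows more than 10% past a recorded baseline.
interface TokenStats {
  total_tokens: number;
  tool_count: number;
}

function withinBudget(stats: TokenStats, baselineTokens: number): boolean {
  const limit = Math.floor(baselineTokens * 1.1); // allow up to 10% growth
  return stats.total_tokens <= limit;
}
```

For example, `withinBudget({ total_tokens: 9100, tool_count: 19 }, 9069)` passes, while a jump past roughly 9,975 tokens would fail.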
+## Implementation Details
+
+**Tokenizer:** Uses `tiktoken` with GPT-4's `cl100k_base` encoding (a good approximation for Claude's tokenizer).
+
+**Script location:** `packages/mcp-server/scripts/measure-token-cost.ts`
+
+**CLI options:**
+```bash
+tsx measure-token-cost.ts # Display table
+tsx measure-token-cost.ts -o file.json # Write JSON to file
+tsx measure-token-cost.ts --help # Show help
+```
+
+## Optimizing Token Cost
+
+**Reduce description verbosity:**
+- Be concise - LLMs don't need hand-holding
+- Remove redundant examples
+- Focus on unique, non-obvious details
+
+**Simplify parameter schemas:**
+- Use `.describe()` sparingly
+- Avoid duplicate descriptions in nested schemas
+- Combine related parameters
+
+**Consolidate tools:**
+- Before adding a new tool, check if existing tools can handle it
+- Consider parameter variants instead of separate tools
+
+## References
+
+- Script: `packages/mcp-server/scripts/measure-token-cost.ts`
+- Workflow: `.github/workflows/token-cost.yml`
+- Tool limits: See "Tool Count Limits" in `docs/adding-tools.mdc`
diff --git a/package.json b/package.json
index 1a29e8ffb..351695439 100644
--- a/package.json
+++ b/package.json
@@ -20,6 +20,7 @@
"url": "git@github.com:getsentry/sentry-mcp.git"
},
"scripts": {
+ "docs:check": "node scripts/check-doc-links.mjs",
"dev": "dotenv -e .env -e .env.local -- turbo dev",
"build": "turbo build after-build",
"deploy": "turbo deploy",
@@ -29,25 +30,29 @@
"lint": "biome lint",
"lint:fix": "biome lint --fix",
"inspector": "pnpx @modelcontextprotocol/inspector@latest",
+ "measure-tokens": "pnpm run --filter ./packages/mcp-server measure-tokens",
"prepare": "simple-git-hooks",
+ "cli": "pnpm run --filter ./packages/mcp-test-client start",
"start:stdio": "pnpm --stream run --filter ./packages/mcp-server start",
"test": "dotenv -e .env -e .env.local -- turbo test",
- "test:ci": "CI=true dotenv -e .env -e .env.local -- pnpm --stream -r run test:ci"
+ "test:ci": "CI=true dotenv -e .env -e .env.local -- pnpm --stream -r run test:ci",
+ "test:watch": "dotenv -e .env -e .env.local -- turbo test:watch",
+ "tsc": "turbo tsc"
},
"dependencies": {
- "@biomejs/biome": "^1.9.4",
- "@types/node": "^22.15.15",
- "@vitest/coverage-v8": "^3.1.3",
- "dotenv": "^16.5.0",
- "dotenv-cli": "^8.0.0",
- "lint-staged": "^15.5.2",
- "simple-git-hooks": "^2.13.0",
- "tsdown": "^0.10.2",
- "tsx": "^4.19.4",
- "turbo": "^2.5.3",
- "typescript": "^5.8.3",
- "vitest": "^3.1.3",
- "vitest-evals": "^0.2.0"
+ "@biomejs/biome": "catalog:",
+ "@types/node": "catalog:",
+ "@vitest/coverage-v8": "catalog:",
+ "dotenv": "catalog:",
+ "dotenv-cli": "catalog:",
+ "lint-staged": "catalog:",
+ "simple-git-hooks": "catalog:",
+ "tsdown": "catalog:",
+ "tsx": "catalog:",
+ "turbo": "catalog:",
+ "typescript": "catalog:",
+ "vitest": "catalog:",
+ "vitest-evals": "catalog:"
},
"simple-git-hooks": {
"pre-commit": "pnpm exec lint-staged --concurrent false"
@@ -67,5 +72,8 @@
"simple-git-hooks",
"workerd"
]
+ },
+ "devDependencies": {
+ "@types/json-schema": "^7.0.15"
}
}
diff --git a/packages/mcp-cloudflare/.env.example b/packages/mcp-cloudflare/.env.example
new file mode 100644
index 000000000..33e68eb3f
--- /dev/null
+++ b/packages/mcp-cloudflare/.env.example
@@ -0,0 +1,27 @@
+# Sentry OAuth Application Credentials
+# Create an OAuth app at: https://sentry.io/settings/account/api/applications/
+# - Homepage URL: http://localhost:5173 (for local dev)
+# - Authorized Redirect URIs: http://localhost:5173/oauth/callback (for local dev)
+SENTRY_CLIENT_ID=
+
+# Client Secret from your Sentry OAuth application
+# Generate this when creating your OAuth app in Sentry
+SENTRY_CLIENT_SECRET=
+
+# Cookie encryption secret for session management
+# Generate a random string (32+ characters recommended)
+# Example: openssl rand -base64 32
+COOKIE_SECRET=thisisasecret
+
+# OpenAI API key for AI-powered search tools (search_events, search_issues)
+# Get yours at: https://platform.openai.com/api-keys
+# Required for natural language query translation features
+OPENAI_API_KEY=sk-proj-generate-this
+
+# The URL where your MCP server is hosted
+# Local development: http://localhost:5173
+# Production: Your deployed URL (e.g., https://your-app.pages.dev)
+MCP_HOST=http://localhost:5173
+
+# Enable Spotlight
+SENTRY_SPOTLIGHT=1
diff --git a/packages/mcp-cloudflare/index.html b/packages/mcp-cloudflare/index.html
index 24d414823..0819d9b58 100644
--- a/packages/mcp-cloudflare/index.html
+++ b/packages/mcp-cloudflare/index.html
@@ -1,14 +1,36 @@
-
-
-
- Sentry MCP
-
-
-
-
-
-
-
+
+
+
+
+ Sentry MCP
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/packages/mcp-cloudflare/package.json b/packages/mcp-cloudflare/package.json
index 369958384..0a083055f 100644
--- a/packages/mcp-cloudflare/package.json
+++ b/packages/mcp-cloudflare/package.json
@@ -1,6 +1,6 @@
{
"name": "@sentry/mcp-cloudflare",
- "version": "0.7.1",
+ "version": "0.21.0",
"private": true,
"type": "module",
"license": "FSL-1.1-ALv2",
@@ -16,48 +16,58 @@
"scripts": {
"build": "tsc -b && vite build",
"dev": "vite",
- "deploy": "npm exec wrangler deploy",
+ "deploy": "pnpm exec wrangler deploy",
+ "cf:versions:upload": "npx wrangler versions upload",
"preview": "vite preview",
"cf-typegen": "wrangler types",
- "test": "vitest",
- "test:ci": "vitest run --coverage --reporter=junit --outputFile=tests.junit.xml",
- "test:watch": "vitest watch"
+ "test": "vitest run",
+ "test:ci": "vitest run --coverage --reporter=default --reporter=junit --outputFile=tests.junit.xml",
+ "test:watch": "vitest",
+ "tsc": "tsc --noEmit"
},
"devDependencies": {
- "@cloudflare/vite-plugin": "^1.1.1",
- "@cloudflare/vitest-pool-workers": "^0.8.26",
- "@cloudflare/workers-types": "^4.20250507.0",
+ "@cloudflare/vite-plugin": "^1.13.15",
+ "@cloudflare/workers-types": "catalog:",
"@sentry/mcp-server": "workspace:*",
+ "@sentry/mcp-server-mocks": "workspace:*",
"@sentry/mcp-server-tsconfig": "workspace:*",
- "@sentry/vite-plugin": "^3.4.0",
- "@tailwindcss/typography": "^0.5.16",
- "@tailwindcss/vite": "^4.1.5",
- "@types/react": "^19.1.3",
- "@types/react-dom": "^19.1.3",
- "@vitejs/plugin-react": "^4.4.1",
- "better-sqlite3": "^11.9.1",
- "tailwindcss": "^4.1.5",
- "vite": "^6.3.5",
- "wrangler": "~4.13.2"
+ "@sentry/vite-plugin": "catalog:",
+ "@tailwindcss/typography": "catalog:",
+ "@tailwindcss/vite": "catalog:",
+ "@types/react": "catalog:",
+ "@types/react-dom": "catalog:",
+ "@types/react-scroll-to-bottom": "^4.2.5",
+ "@vitejs/plugin-react": "catalog:",
+ "tailwindcss": "catalog:",
+ "urlpattern-polyfill": "^10.1.0",
+ "vite": "catalog:",
+ "vitest": "catalog:",
+ "wrangler": "^4.45.0"
},
"dependencies": {
- "@cloudflare/workers-oauth-provider": "^0.0.5",
- "@modelcontextprotocol/sdk": "^1.11.0",
- "@radix-ui/react-accordion": "^1.2.10",
- "@radix-ui/react-slot": "^1.2.2",
- "@sentry/cloudflare": "9.16.1",
- "@sentry/react": "9.16.1",
- "agents": "~0.0.79",
- "better-sqlite3": "^11.9.1",
- "class-variance-authority": "^0.7.1",
- "clsx": "^2.1.1",
- "hono": "^4.7.8",
- "lucide-react": "^0.503.0",
- "react": "^19.1.0",
- "react-dom": "^19.1.0",
- "tailwind-merge": "^3.2.0",
- "tw-animate-css": "^1.2.9",
- "workers-mcp": "0.1.0-3",
- "zod": "^3.24.4"
+ "@ai-sdk/openai": "catalog:",
+ "@ai-sdk/react": "catalog:",
+ "@cloudflare/workers-oauth-provider": "catalog:",
+ "@modelcontextprotocol/sdk": "catalog:",
+ "@radix-ui/react-accordion": "catalog:",
+ "@radix-ui/react-slot": "catalog:",
+ "@sentry/cloudflare": "catalog:",
+ "@sentry/react": "catalog:",
+ "agents": "catalog:",
+ "ai": "catalog:",
+ "better-sqlite3": "catalog:",
+ "class-variance-authority": "catalog:",
+ "clsx": "catalog:",
+ "hono": "catalog:",
+ "lucide-react": "catalog:",
+ "react": "catalog:",
+ "react-dom": "catalog:",
+ "react-markdown": "catalog:",
+ "react-scroll-to-bottom": "^4.2.0",
+ "remark-gfm": "catalog:",
+ "tailwind-merge": "catalog:",
+ "tw-animate-css": "catalog:",
+ "workers-mcp": "catalog:",
+ "zod": "catalog:"
}
}
diff --git a/public/favicon.ico b/packages/mcp-cloudflare/public/favicon.ico
similarity index 100%
rename from public/favicon.ico
rename to packages/mcp-cloudflare/public/favicon.ico
diff --git a/packages/mcp-cloudflare/public/flow-transparent.png b/packages/mcp-cloudflare/public/flow-transparent.png
new file mode 100644
index 000000000..d546305a3
Binary files /dev/null and b/packages/mcp-cloudflare/public/flow-transparent.png differ
diff --git a/packages/mcp-cloudflare/public/flow.jpg b/packages/mcp-cloudflare/public/flow.jpg
new file mode 100644
index 000000000..250d769ba
Binary files /dev/null and b/packages/mcp-cloudflare/public/flow.jpg differ
diff --git a/packages/mcp-cloudflare/src/client/App.tsx b/packages/mcp-cloudflare/src/client/App.tsx
deleted file mode 100644
index e069a887d..000000000
--- a/packages/mcp-cloudflare/src/client/App.tsx
+++ /dev/null
@@ -1,268 +0,0 @@
-import { TOOL_DEFINITIONS } from "@sentry/mcp-server/toolDefinitions";
-import { RESOURCES } from "@sentry/mcp-server/resources";
-import { PROMPT_DEFINITIONS } from "@sentry/mcp-server/promptDefinitions";
-import { Heading, Link } from "./components/ui/base";
-import {
- Accordion,
- AccordionContent,
- AccordionItem,
- AccordionTrigger,
-} from "./components/ui/accordion";
-import Note from "./components/ui/note";
-import { ChevronRight } from "lucide-react";
-import { Header } from "./components/ui/header";
-import flowImage from "./flow.png";
-import { Button } from "./components/ui/button";
-import RemoteSetup from "./components/fragments/remote-setup";
-import { useState } from "react";
-import StdioSetup from "./components/fragments/stdio-setup";
-import Section from "./components/ui/section";
-import { Prose } from "./components/ui/prose";
-
-export default function App() {
- const [stdio, setStdio] = useState(false);
-
- return (
-
-
-
-
-
-
-
-
-
- This service provides a Model Context Provider (MCP) for
- interacting with{" "}
- Sentry's API.
-
-
-
- MCP is pretty sweet. Cloudflare's support of MCP is pretty
- sweet. Sentry is pretty sweet. So we made an MCP for Sentry
- on top of Cloudflare.
-
- David Cramer, Sentry
-
-
What is a Model Context Provider?
-
- Simply put, its a way to plug Sentry's API into an LLM,
- letting you ask questions about your data in context of the
- LLM itself. This lets you take an agent that you already use,
- like Cursor, and pull in additional information from Sentry to
- help with tasks like debugging, code generation, and more.
-
-
-
- This project is still in its infancy as development of the MCP
- specification is ongoing. If you find any problems, or have an
- idea for how we can improve it, please let us know on{" "}
-
- GitHub
-
-
-
Interested in learning more?
-
-
-
- Using Sentry's Seer via MCP
-
-
-
-
- Building Sentry's MCP on Cloudflare
-
-
-
-
-
-
-
-
Getting Started
-
-
- /
-
-
- >
- }
- >
- {stdio ? : }
-
-
-
-
-
- Here's a few sample workflows (prompts) that we've tried to
- design around within the provider:
-
-
-
- {[
- "Check Sentry for errors in file.tsx and propose solutions.",
- "Diagnose issue ISSUE_URL and propose solutions.",
- "What are my latest issues in ORG/PROJECT?",
- "Create a new project in Sentry for PROJECT and setup local instrumentation using it.",
- "Use Sentry's Seer and help me analyze and propose a solution for ISSUE_URL.",
- ].map((prompt) => (
-
-
-
-
-
{prompt}
-
- ))}
-
-
-
-
-
-
- Tools are pre-configured functions that can be used to help
- with common tasks.
-
-
-
- Note: Any tool that takes an{" "}
- organization_slug parameter will try to infer a
- default organization, otherwise you should mention it in the
- prompt.
-
-
- {TOOL_DEFINITIONS.map((tool) => (
-
-
- {tool.name}
-
-
-
-
- If you've got a client that natively supports the current MCP
- specification, including OAuth, you can connect directly.
+ Path Constraints: Restrict the session to a specific
+ organization or project by adding them to the URL path. This ensures
+ all tools operate within the specified scope.
-
-
Integration Guides
+
+
+ /:organization β Limit to one organization
+
+
+ /:organization/:project β Limit to a specific project
+
+
+
+ Agent Mode: Reduce context by exposing a single{" "}
+ use_sentry tool instead of individual tools. The embedded
+ AI agent handles natural language requests and automatically chains
+ tool calls as needed. Note: Agent mode approximately doubles response
+ time due to the embedded AI layer.
+
If this doesn't work, you can manually add the server using the
following steps:
@@ -118,15 +235,16 @@ export default function RemoteSetup() {
MCP: Add Server.
The stdio client is made available on npm at{" "}
-
- @sentry/mcp-server
+
+ {NPM_PACKAGE_NAME}
.
@@ -29,8 +57,21 @@ export default function RemoteSetup() {
- Create a Personal Access Token in your account settings with the
- following scopes:
+ The CLI targets Sentry's hosted service by default. Add host overrides
+ only when you run self-hosted Sentry.
+
+
+
+ Create a User Auth Token in your account settings with the following
+ scopes:
+
+
+ AI-powered search: If you want the
+ search_events and search_issues tools to
+ translate natural language queries, add an
+ OPENAI_API_KEY next to your Sentry token. The rest of the
+ MCP server works without it, so you can skip this step if you do not
+ need those tools.
- You'll then bind that to your MCP instance using the following
- command:
-
+
Now wire up that token to the MCP configuration:
-
- Note: We enable Sentry reporting by default (to
- sentry.io). If you wish to disable it, pass --sentry-dsn={" "}
- with an empty value.
+
+
+ Using with Self-Hosted Sentry
+
+
+ You'll need to provide the hostname of your self-hosted Sentry
+ instance:
+
+
+
+
+
+ Configuration
+
+
+
+
+
+ Core setup
+
+
+
+ --access-token / SENTRY_ACCESS_TOKEN
+
+
Required user auth token.
+
+
+ --host / SENTRY_HOST
+
+
+ Hostname override when you run self-hosted Sentry.
+
+
+
+ --sentry-dsn / SENTRY_DSN
+
+
+ Send telemetry elsewhere or disable it by passing an empty
+ value.
+
+
+
+ OPENAI_API_KEY
+
+
+ Optional for the standard tools, but required for the AI-powered
+ search tools (search_events /
+ search_issues). When unset, those tools stay hidden
+ but everything else works as usual.
+
+
+
+
+
+
+ Constraints
+
+
+
+ --organization-slug
+
+
+ Scope all tools to a single organization (CLI only).
+
+
+
+ --project-slug
+
+
+ Scope all tools to a specific project within that organization
+ (CLI only).
+
+
+
+
+
+
+ Permissions
+
+
+ Use --skills (or MCP_SKILLS) to pick the
+ tool bundles you want to expose. Separate skill ids with commas.
+
+
+
+ --skills / MCP_SKILLS
+
+
+ Skills automatically grant the minimum scopes required by the
+ selected tools. You can combine any of the following ids:
+
+
+ );
+}
diff --git a/packages/mcp-cloudflare/src/client/components/ui/tool-actions.tsx b/packages/mcp-cloudflare/src/client/components/ui/tool-actions.tsx
new file mode 100644
index 000000000..4000e118d
--- /dev/null
+++ b/packages/mcp-cloudflare/src/client/components/ui/tool-actions.tsx
@@ -0,0 +1,34 @@
+/**
+ * Component for rendering a readable list of tools
+ */
+
+export interface ToolInfo {
+ name: string;
+ description: string;
+}
+
+interface ToolActionsProps {
+ tools: ToolInfo[];
+}
+
+export function ToolActions({ tools }: ToolActionsProps) {
+ if (!tools || tools.length === 0) return null;
+
+ return (
+
+
Tools
+
+ {tools.map((tool) => (
+
+
+ {tool.name}
+
+ {tool.description ? (
+
{tool.description}
+ ) : null}
+
+ ))}
+
+
+ );
+}
diff --git a/packages/mcp-cloudflare/src/client/components/ui/typewriter.tsx b/packages/mcp-cloudflare/src/client/components/ui/typewriter.tsx
new file mode 100644
index 000000000..0156a7525
--- /dev/null
+++ b/packages/mcp-cloudflare/src/client/components/ui/typewriter.tsx
@@ -0,0 +1,64 @@
+import { useState, useEffect, useRef } from "react";
+
+interface TypewriterProps {
+ text: string;
+ speed?: number;
+ children?: (displayedText: string) => React.ReactNode;
+ className?: string;
+ onComplete?: () => void;
+}
+
+export function Typewriter({
+ text,
+ speed = 30,
+ children,
+ className = "",
+ onComplete,
+}: TypewriterProps) {
+ const [displayedText, setDisplayedText] = useState("");
+ const [isComplete, setIsComplete] = useState(false);
+ const indexRef = useRef(0);
+ const previousTextRef = useRef("");
+
+ useEffect(() => {
+ // Reset if text has changed (new content streaming in)
+ if (text !== previousTextRef.current) {
+ const previousText = previousTextRef.current;
+
+ // Check if new text is an extension of the previous text
+ if (text.startsWith(previousText) && text.length > previousText.length) {
+ // Text got longer, continue from where we left off
+ indexRef.current = Math.max(displayedText.length, previousText.length);
+ } else {
+ // Text completely changed, restart
+ indexRef.current = 0;
+ setDisplayedText("");
+ setIsComplete(false);
+ }
+
+ previousTextRef.current = text;
+ }
+
+ if (indexRef.current >= text.length) {
+ if (!isComplete) {
+ setIsComplete(true);
+ onComplete?.();
+ }
+ return;
+ }
+
+ const timer = setTimeout(() => {
+ setDisplayedText(text.slice(0, indexRef.current + 1));
+ indexRef.current++;
+ }, speed);
+
+ return () => clearTimeout(timer);
+ }, [text, speed, displayedText.length, isComplete, onComplete]);
+
+ return (
+    <span className={className}>
+      {children ? children(displayedText) : displayedText}
+      {!isComplete && <span>|</span>}
+    </span>
+ );
+}
diff --git a/packages/mcp-cloudflare/src/client/contexts/auth-context.tsx b/packages/mcp-cloudflare/src/client/contexts/auth-context.tsx
new file mode 100644
index 000000000..7fe8f11f9
--- /dev/null
+++ b/packages/mcp-cloudflare/src/client/contexts/auth-context.tsx
@@ -0,0 +1,221 @@
+import {
+ createContext,
+ useContext,
+ useState,
+ useEffect,
+ useCallback,
+ useRef,
+ type ReactNode,
+} from "react";
+import type { AuthContextType } from "../components/chat/types";
+import {
+ isOAuthSuccessMessage,
+ isOAuthErrorMessage,
+} from "../components/chat/types";
+
+const POPUP_CHECK_INTERVAL = 1000;
+
+const AuthContext = createContext<AuthContextType | undefined>(undefined);
+
+interface AuthProviderProps {
+ children: ReactNode;
+}
+
+export function AuthProvider({ children }: AuthProviderProps) {
+ const [isLoading, setIsLoading] = useState(true);
+ const [isAuthenticated, setIsAuthenticated] = useState(false);
+ const [isAuthenticating, setIsAuthenticating] = useState(false);
+ const [authError, setAuthError] = useState("");
+
+ // Keep refs for cleanup
+  const popupRef = useRef<Window | null>(null);
+  const intervalRef = useRef<number | null>(null);
+
+ // Check if authenticated by making a request to the server
+ useEffect(() => {
+ // Check authentication status
+ fetch("/api/auth/status", { credentials: "include" })
+ .then((res) => res.ok)
+ .then((authenticated) => {
+ setIsAuthenticated(authenticated);
+ setIsLoading(false);
+ })
+ .catch(() => {
+ setIsAuthenticated(false);
+ setIsLoading(false);
+ });
+ }, []);
+
+ // Process OAuth result from localStorage
+ const processOAuthResult = useCallback((data: unknown) => {
+ if (isOAuthSuccessMessage(data)) {
+ // Verify session on server before marking authenticated
+ fetch("/api/auth/status", { credentials: "include" })
+ .then((res) => res.ok)
+ .then((authenticated) => {
+ if (authenticated) {
+ // Fully reload the app to pick up new auth context/cookies
+ // This avoids intermediate/loading states and ensures a clean session
+ window.location.reload();
+ } else {
+ setIsAuthenticated(false);
+ setAuthError(
+ "Authentication not completed. Please finish sign-in.",
+ );
+ setIsAuthenticating(false);
+ }
+ })
+ .catch(() => {
+ setIsAuthenticated(false);
+ setAuthError("Failed to verify authentication.");
+ setIsAuthenticating(false);
+ });
+
+ // Cleanup interval and popup reference
+ if (intervalRef.current) {
+ clearInterval(intervalRef.current);
+ intervalRef.current = null;
+ }
+ if (popupRef.current) {
+ popupRef.current = null;
+ }
+ } else if (isOAuthErrorMessage(data)) {
+ setAuthError(data.error || "Authentication failed");
+ setIsAuthenticating(false);
+
+ // Cleanup interval and popup reference
+ if (intervalRef.current) {
+ clearInterval(intervalRef.current);
+ intervalRef.current = null;
+ }
+ if (popupRef.current) {
+ popupRef.current = null;
+ }
+ }
+ }, []);
+
+ // Cleanup on unmount
+ useEffect(() => {
+ return () => {
+ if (intervalRef.current) {
+ clearInterval(intervalRef.current);
+ }
+ };
+ }, []);
+
+ const handleOAuthLogin = useCallback(() => {
+ setIsAuthenticating(true);
+ setAuthError("");
+
+ const desiredWidth = Math.max(Math.min(window.screen.availWidth, 900), 600);
+ const desiredHeight = Math.min(window.screen.availHeight, 900);
+ const windowFeatures = `width=${desiredWidth},height=${desiredHeight},resizable=yes,scrollbars=yes`;
+
+ // Clear any stale results before opening popup
+ try {
+ localStorage.removeItem("oauth_result");
+ } catch {
+ // ignore storage errors
+ }
+
+ const popup = window.open(
+ "/api/auth/authorize",
+ "sentry-oauth",
+ windowFeatures,
+ );
+
+ if (!popup) {
+ setAuthError("Popup blocked. Please allow popups and try again.");
+ setIsAuthenticating(false);
+ return;
+ }
+
+ popupRef.current = popup;
+
+ // Poll for OAuth result in localStorage
+ // We don't check popup.closed as it's unreliable with cross-origin windows
+ intervalRef.current = window.setInterval(() => {
+ // Check localStorage for auth result
+ const storedResult = localStorage.getItem("oauth_result");
+ if (storedResult) {
+ try {
+ const result = JSON.parse(storedResult);
+ localStorage.removeItem("oauth_result");
+ processOAuthResult(result);
+
+ // Clear interval since we got a result
+ if (intervalRef.current) {
+ clearInterval(intervalRef.current);
+ intervalRef.current = null;
+ }
+ popupRef.current = null;
+ } catch (e) {
+ // Invalid stored result, continue polling
+ }
+ }
+ }, POPUP_CHECK_INTERVAL);
+
+ // Stop polling after 5 minutes (safety timeout)
+ setTimeout(() => {
+ if (intervalRef.current) {
+ clearInterval(intervalRef.current);
+ intervalRef.current = null;
+
+ // Final check if we're authenticated
+ fetch("/api/auth/status", { credentials: "include" })
+ .then((res) => res.ok)
+ .then((authenticated) => {
+ if (authenticated) {
+ window.location.reload();
+ } else {
+ setIsAuthenticating(false);
+ setAuthError("Authentication timed out. Please try again.");
+ }
+ })
+ .catch(() => {
+ setIsAuthenticating(false);
+ setAuthError("Authentication timed out. Please try again.");
+ });
+ }
+ }, 300000); // 5 minutes
+ }, [processOAuthResult]);
+
+ const handleLogout = useCallback(async () => {
+ try {
+ await fetch("/api/auth/logout", {
+ method: "POST",
+ credentials: "include",
+ });
+ } catch {
+ // Ignore errors, proceed with local logout
+ }
+
+ setIsAuthenticated(false);
+ }, []);
+
+ const clearAuthState = useCallback(() => {
+ setIsAuthenticated(false);
+ setAuthError("");
+ }, []);
+
+ const value: AuthContextType = {
+ isLoading,
+ isAuthenticated,
+ authToken: "", // Keep for backward compatibility
+ isAuthenticating,
+ authError,
+ handleOAuthLogin,
+ handleLogout,
+ clearAuthState,
+ };
+
+  return <AuthContext.Provider value={value}>{children}</AuthContext.Provider>;
+}
+
+export function useAuth(): AuthContextType {
+ const context = useContext(AuthContext);
+ if (context === undefined) {
+ throw new Error("useAuth must be used within an AuthProvider");
+ }
+ return context;
+}
diff --git a/packages/mcp-cloudflare/src/client/flow.png b/packages/mcp-cloudflare/src/client/flow.png
deleted file mode 100644
index 22e98632d..000000000
Binary files a/packages/mcp-cloudflare/src/client/flow.png and /dev/null differ
diff --git a/packages/mcp-cloudflare/src/client/hooks/use-endpoint-mode.ts b/packages/mcp-cloudflare/src/client/hooks/use-endpoint-mode.ts
new file mode 100644
index 000000000..d46a77d9d
--- /dev/null
+++ b/packages/mcp-cloudflare/src/client/hooks/use-endpoint-mode.ts
@@ -0,0 +1,48 @@
+import { useState, useEffect } from "react";
+
+export type EndpointMode = "standard" | "agent";
+
+const STORAGE_KEY = "sentry-mcp-endpoint-mode";
+
+/**
+ * Hook to manage MCP endpoint mode preference.
+ * Toggles between "/mcp" (standard) and "/mcp?agent=1" (agent mode).
+ *
+ * The preference is persisted in localStorage.
+ */
+export function useEndpointMode() {
+ const [endpointMode, setEndpointModeState] = useState(() => {
+ // Initialize from localStorage on mount
+ if (typeof window !== "undefined") {
+ const stored = localStorage.getItem(STORAGE_KEY);
+ if (stored === "agent" || stored === "standard") {
+ return stored;
+ }
+ }
+ return "standard"; // Default to standard mode
+ });
+
+ // Persist to localStorage when changed
+ useEffect(() => {
+ if (typeof window !== "undefined") {
+ localStorage.setItem(STORAGE_KEY, endpointMode);
+ }
+ }, [endpointMode]);
+
+ const setEndpointMode = (mode: EndpointMode) => {
+ setEndpointModeState(mode);
+ };
+
+ const toggleEndpointMode = () => {
+ setEndpointModeState((prev) =>
+ prev === "standard" ? "agent" : "standard",
+ );
+ };
+
+ return {
+ endpointMode,
+ setEndpointMode,
+ toggleEndpointMode,
+ isAgentMode: endpointMode === "agent",
+ };
+}
diff --git a/packages/mcp-cloudflare/src/client/hooks/use-mcp-metadata.ts b/packages/mcp-cloudflare/src/client/hooks/use-mcp-metadata.ts
new file mode 100644
index 000000000..c5d6914d3
--- /dev/null
+++ b/packages/mcp-cloudflare/src/client/hooks/use-mcp-metadata.ts
@@ -0,0 +1,83 @@
+/**
+ * Custom hook to fetch and manage MCP metadata
+ *
+ * Provides immediate access to prompts and tools without waiting for chat stream
+ */
+import { useState, useEffect, useCallback } from "react";
+
+export interface McpMetadata {
+ type: "mcp-metadata";
+ prompts: Array<{
+ name: string;
+ description: string;
+ parameters: Record<
+ string,
+ {
+ type: string;
+ required: boolean;
+ description?: string;
+ }
+ >;
+ }>;
+ tools: string[];
+ resources?: Array<{
+ name: string;
+ description: string;
+ }>;
+ timestamp: string;
+}
+
+interface UseMcpMetadataResult {
+ metadata: McpMetadata | null;
+ isLoading: boolean;
+ error: string | null;
+  refetch: () => Promise<void>;
+}
+
+export function useMcpMetadata(enabled = true): UseMcpMetadataResult {
+  const [metadata, setMetadata] = useState<McpMetadata | null>(null);
+  const [isLoading, setIsLoading] = useState(false);
+  const [error, setError] = useState<string | null>(null);
+
+ const fetchMetadata = useCallback(async () => {
+ if (!enabled) {
+ return;
+ }
+
+ setIsLoading(true);
+ setError(null);
+
+ try {
+ const response = await fetch("/api/metadata", {
+ credentials: "include", // Include cookies
+ });
+
+ if (!response.ok) {
+ const errorData = await response.json().catch(() => ({}));
+ throw new Error(errorData.error || `HTTP ${response.status}`);
+ }
+
+ const data = await response.json();
+ setMetadata(data);
+ } catch (err) {
+ const errorMessage =
+ err instanceof Error ? err.message : "Failed to fetch metadata";
+ setError(errorMessage);
+ console.error("Failed to fetch MCP metadata:", err);
+ } finally {
+ setIsLoading(false);
+ }
+ }, [enabled]);
+
+  // Fetch metadata on mount and whenever `enabled` changes
+ useEffect(() => {
+ fetchMetadata();
+ }, [fetchMetadata]);
+
+ return {
+ metadata,
+ isLoading,
+ error,
+ refetch: fetchMetadata,
+ };
+}
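The error path above collapses a failed response into a single user-facing message. That shaping can be expressed as a pure function (the name `describeFetchError` is illustrative, not part of the hook):

```typescript
// Prefer the server-provided `error` field; fall back to the HTTP status,
// matching the hook's `errorData.error || \`HTTP ${response.status}\`` fallback.
function describeFetchError(status: number, body: { error?: string }): string {
  return body.error || `HTTP ${status}`;
}
```

Because the hook also swallows JSON parse failures with `.catch(() => ({}))`, a non-JSON error body degrades cleanly to the status-code message.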
diff --git a/packages/mcp-cloudflare/src/client/hooks/use-persisted-chat.ts b/packages/mcp-cloudflare/src/client/hooks/use-persisted-chat.ts
new file mode 100644
index 000000000..6281f8260
--- /dev/null
+++ b/packages/mcp-cloudflare/src/client/hooks/use-persisted-chat.ts
@@ -0,0 +1,168 @@
+import { useCallback, useMemo } from "react";
+import type { Message } from "ai";
+
+const CHAT_STORAGE_KEY = "sentry_chat_messages";
+const TIMESTAMP_STORAGE_KEY = "sentry_chat_timestamp";
+const MAX_STORED_MESSAGES = 100; // Limit storage size
+const CACHE_EXPIRY_MS = 60 * 60 * 1000; // 1 hour in milliseconds
+
+export function usePersistedChat(isAuthenticated: boolean) {
+ // Check if cache is expired
+ const isCacheExpired = useCallback(() => {
+ try {
+ const timestampStr = localStorage.getItem(TIMESTAMP_STORAGE_KEY);
+ if (!timestampStr) return true;
+
+ const timestamp = Number.parseInt(timestampStr, 10);
+ const now = Date.now();
+ return now - timestamp > CACHE_EXPIRY_MS;
+ } catch {
+ return true;
+ }
+ }, []);
+
+ // Update timestamp to extend cache expiry
+ const updateTimestamp = useCallback(() => {
+ try {
+ localStorage.setItem(TIMESTAMP_STORAGE_KEY, Date.now().toString());
+ } catch (error) {
+ console.error("Failed to update chat timestamp:", error);
+ }
+ }, []);
+
+ // Validate a message to ensure it won't cause conversion errors
+ const isValidMessage = useCallback((msg: Message): boolean => {
+ // Check if message has parts (newer structure)
+ if (msg.parts && Array.isArray(msg.parts)) {
+ // Check each part for validity
+ return msg.parts.every((part) => {
+ // Text parts are always valid
+ if (part.type === "text") {
+ return true;
+ }
+
+ // Tool invocation parts must be complete (have result) if state is "call" or "result"
+ if (part.type === "tool-invocation") {
+ const invocation = part as any;
+ // If it's in "call" or "result" state, it must have a result
+ if (invocation.state === "call" || invocation.state === "result") {
+ const content = invocation.result?.content;
+ // Ensure content exists and is not an empty array
+ return (
+ content && (Array.isArray(content) ? content.length > 0 : true)
+ );
+ }
+ // partial-call state is okay without result
+ return true;
+ }
+
+ // Other part types are assumed valid
+ return true;
+ });
+ }
+
+ // Check if message has content (legacy structure)
+ if (msg.content && typeof msg.content === "string") {
+ return msg.content.trim() !== "";
+ }
+
+ return false;
+ }, []);
+
+ // Load initial messages from localStorage
+ const initialMessages = useMemo(() => {
+ if (!isAuthenticated) return [];
+
+ // Check if cache is expired
+ if (isCacheExpired()) {
+ // Clear expired data
+ localStorage.removeItem(CHAT_STORAGE_KEY);
+ localStorage.removeItem(TIMESTAMP_STORAGE_KEY);
+ return [];
+ }
+
+ try {
+ const stored = localStorage.getItem(CHAT_STORAGE_KEY);
+ if (stored) {
+ const parsed = JSON.parse(stored) as Message[];
+ // Validate the data structure
+ if (Array.isArray(parsed) && parsed.length > 0) {
+ // Filter out any invalid or incomplete messages
+ const validMessages = parsed.filter(isValidMessage);
+ if (validMessages.length > 0) {
+ // Update timestamp since we're loading existing messages
+ updateTimestamp();
+ return validMessages;
+ }
+ }
+ }
+ } catch (error) {
+ console.error("Failed to load chat history:", error);
+ // Clear corrupted data
+ localStorage.removeItem(CHAT_STORAGE_KEY);
+ localStorage.removeItem(TIMESTAMP_STORAGE_KEY);
+ }
+
+ return [];
+ }, [isAuthenticated, isCacheExpired, updateTimestamp, isValidMessage]);
+
+ // Function to save messages
+ const saveMessages = useCallback(
+ (messages: Message[]) => {
+ if (!isAuthenticated || messages.length === 0) return;
+
+ try {
+ // Filter out invalid messages before storing
+ const validMessages = messages.filter(isValidMessage);
+
+ // Only store the most recent valid messages to avoid storage limits
+ const messagesToStore = validMessages.slice(-MAX_STORED_MESSAGES);
+
+ // Don't save if there are no valid messages
+ if (messagesToStore.length === 0) {
+ localStorage.removeItem(CHAT_STORAGE_KEY);
+ localStorage.removeItem(TIMESTAMP_STORAGE_KEY);
+ return;
+ }
+
+ localStorage.setItem(CHAT_STORAGE_KEY, JSON.stringify(messagesToStore));
+ // Update timestamp when saving messages (extends expiry)
+ updateTimestamp();
+ } catch (error) {
+ console.error("Failed to save chat history:", error);
+ // If we hit storage quota, try to clear old messages
+ if (
+ error instanceof DOMException &&
+ error.name === "QuotaExceededError"
+ ) {
+ try {
+ const validMessages = messages.filter(isValidMessage);
+ const recentMessages = validMessages.slice(-50); // Keep only last 50
+ localStorage.setItem(
+ CHAT_STORAGE_KEY,
+ JSON.stringify(recentMessages),
+ );
+ updateTimestamp();
+ } catch {
+ // If still failing, clear the storage
+ localStorage.removeItem(CHAT_STORAGE_KEY);
+ localStorage.removeItem(TIMESTAMP_STORAGE_KEY);
+ }
+ }
+ }
+ },
+ [isAuthenticated, updateTimestamp, isValidMessage],
+ );
+
+ // Clear persisted messages
+ const clearPersistedMessages = useCallback(() => {
+ localStorage.removeItem(CHAT_STORAGE_KEY);
+ localStorage.removeItem(TIMESTAMP_STORAGE_KEY);
+ }, []);
+
+ return {
+ initialMessages,
+ saveMessages,
+ clearPersistedMessages,
+ };
+}
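The sliding one-hour expiry above can be isolated as a pure predicate. This sketch uses illustrative names and adds an explicit NaN guard; the hook itself would treat an unparsable timestamp as fresh, since `NaN` comparisons are always false:

```typescript
const CACHE_EXPIRY_MS = 60 * 60 * 1000; // 1 hour in milliseconds

// A stored timestamp is expired when it is absent, unparsable, or more than
// an hour older than `now` - the same rule isCacheExpired applies.
function isExpired(timestampStr: string | null, now: number): boolean {
  if (!timestampStr) return true;
  const timestamp = Number.parseInt(timestampStr, 10);
  if (Number.isNaN(timestamp)) return true;
  return now - timestamp > CACHE_EXPIRY_MS;
}
```

Taking `now` as a parameter keeps the function deterministic and testable; the hook passes `Date.now()` at the call site.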
diff --git a/packages/mcp-cloudflare/src/client/hooks/use-scroll-lock.ts b/packages/mcp-cloudflare/src/client/hooks/use-scroll-lock.ts
new file mode 100644
index 000000000..fb20fc35c
--- /dev/null
+++ b/packages/mcp-cloudflare/src/client/hooks/use-scroll-lock.ts
@@ -0,0 +1,77 @@
+/**
+ * Hook to lock body scroll when a component is active
+ * Handles edge cases like iOS Safari and nested locks
+ */
+
+import { useEffect, useRef } from "react";
+
+// Track active locks to handle nested components
+let activeLocks = 0;
+let originalStyles: {
+ overflow?: string;
+ position?: string;
+ top?: string;
+ width?: string;
+} = {};
+
+export function useScrollLock(enabled = true) {
+ const scrollPositionRef = useRef(0);
+
+ useEffect(() => {
+ if (!enabled) return;
+
+ // Save scroll position and lock scroll
+ const lockScroll = () => {
+ // First lock - save original styles
+ if (activeLocks === 0) {
+ scrollPositionRef.current = window.scrollY;
+
+ originalStyles = {
+ overflow: document.body.style.overflow,
+ position: document.body.style.position,
+ top: document.body.style.top,
+ width: document.body.style.width,
+ };
+
+ // Apply scroll lock styles
+ document.body.style.overflow = "hidden";
+
+ // iOS Safari fix - prevent rubber band scrolling
+ if (/iPad|iPhone|iPod/.test(navigator.userAgent)) {
+ document.body.style.position = "fixed";
+ document.body.style.top = `-${scrollPositionRef.current}px`;
+ document.body.style.width = "100%";
+ }
+ }
+
+ activeLocks++;
+ };
+
+ // Restore scroll position and unlock
+ const unlockScroll = () => {
+ activeLocks--;
+
+ // Last lock removed - restore original styles
+ if (activeLocks === 0) {
+ document.body.style.overflow = originalStyles.overflow || "";
+ document.body.style.position = originalStyles.position || "";
+ document.body.style.top = originalStyles.top || "";
+ document.body.style.width = originalStyles.width || "";
+
+ // Restore scroll position for iOS
+ if (/iPad|iPhone|iPod/.test(navigator.userAgent)) {
+ window.scrollTo(0, scrollPositionRef.current);
+ }
+
+ originalStyles = {};
+ }
+ };
+
+ lockScroll();
+
+ // Cleanup
+ return () => {
+ unlockScroll();
+ };
+ }, [enabled]);
+}
diff --git a/packages/mcp-cloudflare/src/client/hooks/use-streaming-simulation.ts b/packages/mcp-cloudflare/src/client/hooks/use-streaming-simulation.ts
new file mode 100644
index 000000000..d44fbd0d1
--- /dev/null
+++ b/packages/mcp-cloudflare/src/client/hooks/use-streaming-simulation.ts
@@ -0,0 +1,77 @@
+/**
+ * Hook for simulating streaming animation for local messages (like slash commands)
+ * This provides the same UX as AI-generated responses for locally generated content
+ */
+import { useState, useCallback, useRef, useEffect } from "react";
+
+interface StreamingSimulationState {
+ isStreaming: boolean;
+ streamingMessageId: string | null;
+}
+
+export function useStreamingSimulation() {
+  const [state, setState] = useState<StreamingSimulationState>({
+ isStreaming: false,
+ streamingMessageId: null,
+ });
+
+  const timeoutRef = useRef<ReturnType<typeof setTimeout> | null>(null);
+
+ // Start streaming simulation for a specific message
+ const startStreaming = useCallback((messageId: string, duration = 1000) => {
+ setState({
+ isStreaming: true,
+ streamingMessageId: messageId,
+ });
+
+ // Clear any existing timeout
+ if (timeoutRef.current) {
+ clearTimeout(timeoutRef.current);
+ }
+
+ // Stop streaming after the specified duration
+ timeoutRef.current = setTimeout(() => {
+ setState({
+ isStreaming: false,
+ streamingMessageId: null,
+ });
+ }, duration);
+ }, []);
+
+ // Stop streaming simulation immediately
+ const stopStreaming = useCallback(() => {
+ if (timeoutRef.current) {
+ clearTimeout(timeoutRef.current);
+ timeoutRef.current = null;
+ }
+ setState({
+ isStreaming: false,
+ streamingMessageId: null,
+ });
+ }, []);
+
+ // Check if a specific message is currently streaming
+ const isMessageStreaming = useCallback(
+ (messageId: string) => {
+ return state.isStreaming && state.streamingMessageId === messageId;
+ },
+ [state.isStreaming, state.streamingMessageId],
+ );
+
+ // Cleanup on unmount
+ useEffect(() => {
+ return () => {
+ if (timeoutRef.current) {
+ clearTimeout(timeoutRef.current);
+ }
+ };
+ }, []);
+
+ return {
+ isStreaming: state.isStreaming,
+ streamingMessageId: state.streamingMessageId,
+ startStreaming,
+ stopStreaming,
+ isMessageStreaming,
+ };
+}
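The per-message check at the heart of the hook is a pure predicate over the simulation state, sketched here outside React:

```typescript
interface StreamingSimulationState {
  isStreaming: boolean;
  streamingMessageId: string | null;
}

// A message is "streaming" only while the simulation is active AND it is
// the specific message the simulation was started for.
function isMessageStreaming(
  state: StreamingSimulationState,
  messageId: string,
): boolean {
  return state.isStreaming && state.streamingMessageId === messageId;
}
```

Keying the check on the message id lets many messages render concurrently while only the locally generated one animates.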
diff --git a/packages/mcp-cloudflare/src/client/index.css b/packages/mcp-cloudflare/src/client/index.css
index 6ee0ddf5a..424ebafe2 100644
--- a/packages/mcp-cloudflare/src/client/index.css
+++ b/packages/mcp-cloudflare/src/client/index.css
@@ -5,7 +5,7 @@
@plugin "@tailwindcss/typography";
:root {
- --background: oklch(0.145 0 0);
+ --background: oklch(0.13 0.028 261.692);
--foreground: oklch(0.985 0 0);
--card: oklch(0.145 0 0);
--card-foreground: oklch(0.985 0 0);
@@ -83,12 +83,7 @@
}
body {
- @apply bg-background text-foreground;
- background: linear-gradient(
- oklch(0.13 0.028 261.692) 0%,
- oklch(0.21 0.034 264.665) 50%,
- oklch(0.13 0.028 261.692) 100%
- );
+ @apply bg-background text-foreground bg-gradient-to-br from-slate-950 via-slate-900 to-slate-950;
min-height: 100vh;
}
diff --git a/packages/mcp-cloudflare/src/client/instrument.ts b/packages/mcp-cloudflare/src/client/instrument.ts
index d628e0abf..abd303a93 100644
--- a/packages/mcp-cloudflare/src/client/instrument.ts
+++ b/packages/mcp-cloudflare/src/client/instrument.ts
@@ -1,9 +1,11 @@
import * as Sentry from "@sentry/react";
+import { sentryBeforeSend } from "@sentry/mcp-server/telem/sentry";
Sentry.init({
dsn: import.meta.env.VITE_SENTRY_DSN,
sendDefaultPii: true,
tracesSampleRate: 1,
+ beforeSend: sentryBeforeSend,
environment:
import.meta.env.VITE_SENTRY_ENVIRONMENT ?? import.meta.env.NODE_ENV,
});
diff --git a/packages/mcp-cloudflare/src/client/main.tsx b/packages/mcp-cloudflare/src/client/main.tsx
index 207774612..7ea1d1ef1 100644
--- a/packages/mcp-cloudflare/src/client/main.tsx
+++ b/packages/mcp-cloudflare/src/client/main.tsx
@@ -3,7 +3,8 @@ import "./instrument";
import { StrictMode } from "react";
import { createRoot } from "react-dom/client";
import "./index.css";
-import App from "./App";
+import App from "./app";
+import { AuthProvider } from "./contexts/auth-context";
import * as Sentry from "@sentry/react";
const container = document.getElementById("root");
@@ -21,6 +22,8 @@ const root = createRoot(container!, {
root.render(
-  <StrictMode>
-    <App />
-  </StrictMode>,
+  <StrictMode>
+    <AuthProvider>
+      <App />
+    </AuthProvider>
+  </StrictMode>,
);
diff --git a/packages/mcp-cloudflare/src/client/pages/home.tsx b/packages/mcp-cloudflare/src/client/pages/home.tsx
new file mode 100644
index 000000000..ab6c6a998
--- /dev/null
+++ b/packages/mcp-cloudflare/src/client/pages/home.tsx
@@ -0,0 +1,211 @@
+import TOOL_DEFINITIONS from "@sentry/mcp-server/toolDefinitions";
+import { Link } from "../components/ui/base";
+import {
+ Accordion,
+ AccordionContent,
+ AccordionItem,
+ AccordionTrigger,
+} from "../components/ui/accordion";
+import Note from "../components/ui/note";
+import { Sparkles } from "lucide-react";
+import { Button } from "../components/ui/button";
+import RemoteSetup from "../components/fragments/remote-setup";
+import { useState } from "react";
+import StdioSetup from "../components/fragments/stdio-setup";
+import Section from "../components/ui/section";
+import { Prose } from "../components/ui/prose";
+import JsonSchemaParams from "../components/ui/json-schema-params";
+
+interface HomeProps {
+ onChatClick: () => void;
+}
+
+export default function Home({ onChatClick }: HomeProps) {
+ const [stdio, setStdio] = useState(false);
+
+ return (
+
+
+
+
+
+
+ This service implements the Model Context Protocol (MCP) for
+ interacting with Sentry,
+ focused on human-in-the-loop coding agents and developer workflows
+ rather than general-purpose API access.
+
+
+
+ {/* Big Call to Action - Mobile Only */}
+
+
+
+
+ Chat with your stack traces. Argue with confidence. Lose
+ gracefully.
+
+
+
+
+ Ask: "What are my recent issues?"
+
+
+
+
+
+
+
+
+ Simply put, it's a way to plug Sentry's API into an LLM, letting
+ you ask questions about your data in context of the LLM itself.
+ This lets you take a coding agent that you already use, like
+ Cursor or Claude Code, and pull in additional information from
+ Sentry to help with tasks like debugging, fixing production
+ errors, and understanding your application's behavior.
+
+
+ This project is still in its infancy as development of the MCP
+ specification is ongoing. If you find any problems, or have an
+ idea for how we can improve it, please let us know on{" "}
+
+ GitHub
+
+
+
Interested in learning more?
+
+
+
+ Using Sentry's Seer via MCP
+
+
+
+
+ Building Sentry's MCP on Cloudflare
+
+
+
+
+
+
+
+
Getting Started
+
+
+
+
+ >
+ }
+ >
+
+ {!stdio ? (
+
+
+
+ ) : (
+
+
+
+ )}
+
+
+
+
+
+
+
+ Tools are pre-configured functions that can be used to help with
+ common tasks.
+
+
+
+ Note: Any tool that takes an{" "}
+ organization_slug parameter will try to infer a default
+ organization, otherwise you should mention it in the prompt.
+
+
+ {TOOL_DEFINITIONS.sort((a, b) => a.name.localeCompare(b.name)).map(
+ (tool) => (
+
+
+ {tool.name}
+
+
+
+