Migrate plugin config from config.json to database#503
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the CodeRabbit settings.
📝 Walkthrough

Adds DB-backed plugin config persistence with migrations, admin HTTP endpoints to read and save config, cluster-wide config synchronization, a config migration runner at server activation, frontend admin UI wiring to the new endpoints, and tests covering the store, API, migrations, and E2E flows.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as HTTP Client
    participant API as Admin API<br/>handleGetConfig
    participant Store as Config Store
    Client->>API: GET /admin/config
    API->>Store: GetConfig()
    alt Config exists
        Store-->>API: Config JSON
    else No config
        Store-->>API: nil
    end
    alt Retrieved config is nil
        API->>API: Use default Config
    else Retrieved config exists
        API->>API: Use retrieved Config
    end
    API-->>Client: HTTP 200 (Config JSON)
```

```mermaid
sequenceDiagram
    participant Client as HTTP Client
    participant API as Admin API<br/>handleSaveConfig
    participant Store as Config Store
    participant Updater as ConfigUpdater
    participant Notifier as ClusterNotifier
    Client->>API: PUT /admin/config (Config JSON)
    API->>API: Parse JSON -> cfg
    API->>Store: SaveConfig(cfg)
    Store->>Store: Acquire pg_advisory_xact_lock
    Store->>Store: Deactivate old active row
    Store->>Store: Insert new active row
    Store-->>API: Commit success
    API->>Updater: Update(&cfg)
    API->>Notifier: PublishConfigUpdate()
    Notifier-->>API: Success
    API-->>Client: HTTP 200
```
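The SaveConfig steps in the walkthrough could map to SQL along these lines. This is a sketch only: the `Agents_ConfigHistory` table and `Active` column appear in the review comments, but the other column names and the advisory-lock key are assumptions for illustration.

```sql
BEGIN;

-- Serialize concurrent saves: the transaction-scoped advisory lock is
-- released automatically at COMMIT/ROLLBACK. Lock key is hypothetical.
SELECT pg_advisory_xact_lock(hashtext('agents_config_save'));

-- Deactivate the previously active row...
UPDATE Agents_ConfigHistory SET Active = false WHERE Active = true;

-- ...and insert the new config as the single active row.
-- Column names Value/CreateAt are assumed for this sketch.
INSERT INTO Agents_ConfigHistory (Value, Active, CreateAt)
VALUES ($1, true, $2);

COMMIT;
```

Because the lock is `_xact_`-scoped, two concurrent saves cannot interleave the deactivate/insert pair, which is what keeps exactly one row active.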
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
🤖 LLM Evaluation Results

OpenAI: ✅ Overall: 19/19 tests passed (100.0%)

Anthropic: ❌ 1 failed evaluation — TestReactEval/[anthropic]_react_cat_message

This comment was automatically generated by the eval CI pipeline.
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In `@store/migrate.go`:
- Around line 48-52: The current morph.New call sets
SetStatementTimeoutInSeconds(100000) which is excessively large; change the
timeout to a reasonable value (e.g., 300–600 seconds) or, if you truly need no
timeout, use a negative value to explicitly disable it. Locate the morph.New
invocation (the call that uses morph.WithLock("agents-plugin-lock-key") and
morph.SetMigrationTableName("Agents_DB_Migrations")) and replace
SetStatementTimeoutInSeconds(100000) with SetStatementTimeoutInSeconds(300) or
SetStatementTimeoutInSeconds(600) (or SetStatementTimeoutInSeconds(-1) to
disable).
In `@store/migrations/000003_create_config_history_table.up.sql`:
- Around line 1-8: Replace the non-unique index on Agents_ConfigHistory with a
partial unique index to enforce that only one row can have Active = true: drop
the existing idx_agents_confighistory_active and create a UNIQUE INDEX on
Agents_ConfigHistory(Active) WHERE Active = true (so the DB guarantees a single
active config), and update the down migration to DROP that unique partial index
(use the same index name you create) so rollbacks remove the correct index;
ensure you reference the table name Agents_ConfigHistory and the index name
idx_agents_confighistory_active when making these changes.
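The partial unique index the comment asks for could look like the following in the up migration. This is a sketch: the table and index names come from the review above, but the exact DDL is an assumption about how the migration would be written.

```sql
DROP INDEX IF EXISTS idx_agents_confighistory_active;

-- Partial unique index: the WHERE predicate restricts uniqueness to rows
-- with Active = true, so the database guarantees at most one active config
-- row while historical rows (Active = false) remain unlimited.
CREATE UNIQUE INDEX idx_agents_confighistory_active
    ON Agents_ConfigHistory (Active)
    WHERE Active = true;
```

The down migration would `DROP INDEX idx_agents_confighistory_active;` and recreate the original non-unique index, so rollbacks remove the correct object.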
In `@webapp/src/client.tsx`:
- Around line 488-522: Replace the loose any types by introducing a PluginConfig
interface describing the plugin configuration shape and use it in both function
signatures and return types: change getPluginConfig(): Promise<any> to
getPluginConfig(): Promise<PluginConfig> and savePluginConfig(config: any) to
savePluginConfig(config: PluginConfig); ensure the JSON.parse/response.json()
result is typed as PluginConfig (or validated/casted) and JSON.stringify(config)
accepts PluginConfig; add the new PluginConfig type definition near the top of
this module or in a shared types file and export/import it as needed so callers
get proper typing.
In `@webapp/src/components/system_console/config.tsx`:
- Around line 181-194: The effect using useEffect contains the async loadConfig
function which references intl (and intl.formatMessage) but intl is not listed
in the dependency array; update the useEffect dependencies to include intl so
the effect re-runs when locale/messages change (i.e., change the
useEffect([...]) to include intl), ensuring loadConfig (which calls
getPluginConfig, setLocalConfig and setLoadError) uses the current intl
instance; alternatively wrap any intl-using callbacks in useCallback/useRef, but
the simplest fix is adding intl to the dependency array of the useEffect that
defines loadConfig.
🧹 Nitpick comments (5)
config/legacy_migrations.go (1)
165-169: Ensure deterministic service ordering after dedupe. Map iteration order is random, which can lead to noisy config diffs or unstable snapshots. Consider sorting the slice after building it (add a `sort` import).

♻️ Proposed adjustment for stable ordering

```diff
 existingConfig.Services = make([]llm.ServiceConfig, 0, len(serviceMap))
 for _, svc := range serviceMap {
     existingConfig.Services = append(existingConfig.Services, svc)
 }
+sort.Slice(existingConfig.Services, func(i, j int) bool {
+    return existingConfig.Services[i].ID < existingConfig.Services[j].ID
+})
```

webapp/src/components/system_console/config.tsx (2)
197-211: Use `forEach` instead of `map` for side effects. Line 201 uses `.map()` but discards the return value. Use `forEach` when the intent is to perform side effects.

Suggested fix

```diff
- Object.keys(avatarUpdates).map((username: string) => setUserProfilePictureByUsername(username, avatarUpdates[username]));
+ Object.keys(avatarUpdates).forEach((username: string) => setUserProfilePictureByUsername(username, avatarUpdates[username]));
```
79-94: Consider using theme colours for `ErrorContainer`. The error container uses hardcoded colours. For better consistency with Mattermost's design system, consider using theme variables if available.
server/main.go (2)
96-107: Consider using defer for mutex unlock. The current pattern of explicit unlocks on each error path is error-prone during future modifications. Using defer would be safer.

Suggested refactor

```diff
 mtx, err := cluster.NewMutex(p.API, "ai_db_migrations")
 if err != nil {
     return fmt.Errorf("failed to create cluster mutex: %w", err)
 }
 mtx.Lock()
+defer mtx.Unlock()
 if migrateErr := p.store.RunMigrations(); migrateErr != nil {
-    mtx.Unlock()
     return fmt.Errorf("failed to run database migrations: %w", migrateErr)
 }
-mtx.Unlock()
```
121-166: Same defer pattern recommended for config migration mutex. The config migration section has multiple exit points with explicit unlocks. Using defer would simplify this and prevent potential deadlocks if new error paths are added.

Suggested refactor

```diff
 mtx2, err := cluster.NewMutex(p.API, "ai_config_migration")
 if err != nil {
     return fmt.Errorf("failed to create config migration mutex: %w", err)
 }
 mtx2.Lock()
+defer mtx2.Unlock()
 migrated, err := p.store.IsConfigMigrated()
 if err != nil {
-    mtx2.Unlock()
     return fmt.Errorf("failed to check config migration status: %w", err)
 }
 if !migrated {
     // Load from config.json
     var cfgWrap configuration
     if loadErr := p.API.LoadPluginConfiguration(&cfgWrap); loadErr != nil {
-        mtx2.Unlock()
         return fmt.Errorf("failed to load plugin configuration for migration: %w", loadErr)
     }
     // ... rest of migration code ...
     if migrationErr != nil {
-        mtx2.Unlock()
         return fmt.Errorf("failed to run legacy config migrations: %w", migrationErr)
     }
     // ... save to DB ...
     if saveErr := p.store.SaveConfig(finalCfg); saveErr != nil {
-        mtx2.Unlock()
         return fmt.Errorf("failed to save migrated config to database: %w", saveErr)
     }
     pluginAPI.Log.Info("Config migrated from config.json to database")
 }
-mtx2.Unlock()
```
/update-branch
Force-pushed from fcfe622 to 37a7da1
Pull request overview
This pull request implements a comprehensive migration of plugin configuration from Mattermost's config.json to direct database storage, along with introducing a multi-stage tool approval system for channel mentions. The changes span backend infrastructure, API endpoints, and frontend UI components.
Changes:
- Migrates plugin configuration from file-based (`config.json`) to database-backed storage using Morph migrations
- Introduces multiplayer tool calling with two-stage approval (call stage → result stage) in public channels
- Adds admin API endpoints (`GET`/`PUT /admin/config`) with cluster event propagation for HA environments
- Implements tool call redaction in channels while keeping full details visible to the invoker
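The redaction idea in the last bullet, full details for the invoker, a stripped payload for everyone else, can be sketched as a pure function. The `ToolCall` shape and `redactForViewer` name are hypothetical; the plugin's real types live in `streaming/streaming.go` and differ in detail.

```go
package main

import "fmt"

// ToolCall is a minimal, hypothetical shape for a pending tool call.
type ToolCall struct {
	ID        string
	Name      string
	Arguments string
	Result    string
}

// redactForViewer returns the tool call as a given viewer should see it:
// the invoker sees everything; everyone else only learns that a call with
// this name exists (arguments and results stay private).
func redactForViewer(tc ToolCall, viewerID, invokerID string) ToolCall {
	if viewerID == invokerID {
		return tc
	}
	tc.Arguments = ""
	tc.Result = ""
	return tc
}

func main() {
	call := ToolCall{ID: "t1", Name: "search", Arguments: `{"q":"secret"}`, Result: "42"}
	forInvoker := redactForViewer(call, "alice", "alice")
	forOthers := redactForViewer(call, "bob", "alice")
	fmt.Println(forInvoker.Arguments != "", forOthers.Arguments == "") // true true
}
```

Since `ToolCall` is passed by value, the stored (full) copy is never mutated; only the payload sent to non-invokers is stripped.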
Reviewed changes
Copilot reviewed 53 out of 55 changed files in this pull request and generated no comments.
Show a summary per file
| File | Description |
|---|---|
| store/ | New package with Morph-based schema migrations and config persistence |
| config/legacy_migrations.go | Refactored into pure functions for one-time data migration |
| server/main.go | Updated activation to run DB migrations and migrate config.json → DB |
| server/cluster_events.go | New cluster event handlers for config synchronization |
| api/api_config.go | New admin endpoints for config management |
| api/api_post.go | New tool calling endpoints with privacy controls |
| streaming/streaming.go | Tool call redaction logic for channel mentions |
| conversations/tool_handling.go | Two-stage tool approval workflow implementation |
| webapp/src/components/tool_card.tsx | Enhanced UI for two-stage approvals with Share/Keep Private buttons |
| webapp/src/components/system_console/config.tsx | Updated to use plugin API instead of Mattermost config |
| e2e/tests/multiplayer-tool-calling/ | Comprehensive E2E tests for the tool calling feature |
💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.
* Integrate Bifrost LLM gateway library for unified provider access

  Replace individual LLM provider implementations (OpenAI, Anthropic, Azure, Bedrock, Cohere, Mistral) with the Bifrost gateway library for chat completions. Key changes:
  - Add new bifrost/ package implementing llm.LanguageModel interface
  - BifrostLLM adapter handles message conversion, tool calling, and streaming
  - Maps ServiceConfig to Bifrost provider configuration
  - Update bots/bots.go to use Bifrost for supported providers
  - Keep asage provider as fallback for unsupported types
  - Retain openai package for transcription and embeddings (not covered by Bifrost)
  - Retain anthropic/bedrock packages for FetchModels and evals

  This provides a unified API for multiple LLM providers while maintaining backwards compatibility with existing configurations.

* Add Bifrost support for embeddings and transcription

  Extend Bifrost integration to cover additional LLM capabilities.

  Embeddings:
  - Add BifrostEmbeddingProvider implementing embeddings.EmbeddingProvider interface
  - Support CreateEmbedding and BatchCreateEmbeddings via Bifrost gateway
  - Add "bifrost" provider type to embeddings configuration
  - Support OpenAI, Azure, Cohere, and Bedrock providers for embeddings

  Transcription:
  - Add BifrostTranscriber for audio-to-text conversion via Bifrost
  - Support VTT format output compatible with existing subtitles parsing
  - Configurable model and provider selection

  This provides unified access to embedding and transcription APIs through the Bifrost gateway, allowing users to choose their preferred provider while maintaining a consistent interface.

* Remove legacy provider packages in favor of Bifrost

  This commit completes the migration to using Bifrost as the unified LLM gateway by:
  - Deleting openai/, anthropic/, bedrock/, and asage/ packages
  - Removing ASage service type support from configuration
  - Updating bots.go to use Bifrost exclusively for LLM and transcription
  - Updating search/embeddings.go to use Bifrost for embedding providers
  - Fixing type assignment issues in getLLM with interface types
  - Using service DefaultModel for transcription (fallback to whisper-1)
  - Cleaning up config.go by removing OpenAI-specific helper functions

  All LLM providers are now accessed through the Bifrost gateway library, providing a consistent interface across OpenAI, Anthropic, Azure, OpenAI-compatible, and Bedrock providers.

* Improve Bifrost configuration to use built-in features
  - Use NetworkConfig.BaseURL for OpenAI Compatible and other providers that support custom API endpoints (Azure uses AzureKeyConfig instead)
  - Pass OpenAI OrgID via NetworkConfig.ExtraHeaders
  - Configure built-in retry logic (2 retries with 1s-10s backoff)
  - Remove unused httpClient from providerAccount struct (Bifrost manages its own HTTP client internally)

* Update Bifrost integration for newer library API
* Responses API
* Fix CI
* Mod tidy
* Add reasoning and annotation streaming to Bifrost responses
* Restore missing functionality in Bifrost migration
  - Add auto-run tool resolution wrapper with recursive loop
  - Honor ReasoningDisabled per-request config
  - Route native web search through Responses API for all providers
  - Stream reasoning/thinking events in chat API path
  - Restore Anthropic thinking budget auto-calculation
  - Preserve thinking blocks in conversation history
  - Merge consecutive same-role messages for Anthropic
  - Fix transcription model to always use whisper-1
  - Pass embedding dimensions in requests
  - Remove unused httpClient parameter from Bifrost
* Fix review issues in Bifrost migration
* /v1 issue
* Fix remaining e2e failures: reasoning summaries, mock model, and embedding URL
  - Set reasoning.Summary = "auto" for OpenAI Responses API so reasoning summary events are emitted in the stream (matches old SDK behavior)
  - Add defaultModel to mock service configs (Bifrost requires it)
  - Pass APIURL for OpenAI embedding provider for consistency
  - Normalize embedding provider URLs to strip /v1 suffix
* Fix some time issues
* Fix e2e flake mode
* Lint
* Annotations fixed
* Fix for Anthropic annotations

Co-authored-by: Claude <noreply@anthropic.com>
Pull request overview
Copilot reviewed 30 out of 32 changed files in this pull request and generated no new comments.
Phase 1 of config-to-database migration. Introduces:
- store/ package with Store type wrapping sqlx.DB + Squirrel builder
- Morph-based schema migrations with embedded SQL files
- Agents_System key-value table for plugin metadata
- LLM_PostMeta table (now Morph-managed with IF NOT EXISTS)
- Agents_ConfigHistory table for config versioning
- Two-layer HA safety: cluster mutex + Morph advisory lock
- Integration tests using testcontainers-go (real PostgreSQL)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Phase 2 of config-to-database migration. Introduces:
- store/config.go: GetConfig, SaveConfig, IsConfigMigrated with transaction-safe history tracking in Agents_ConfigHistory
- config/legacy_migrations.go: Pure config transforms moved from server/migrations.go (no plugin API dependency)
- One-time config data migration in OnActivate: reads config.json, applies legacy transforms, writes to DB, loads into memory
- OnConfigurationChange becomes no-op (config now flows through DB)
- Deletes server/migrations.go and server/migrations_test.go

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Phase 3 of config-to-database migration. Introduces:
- GET/PUT /admin/config endpoints for system console
- ConfigStore, ConfigUpdater, ClusterNotifier interfaces in api package
- Cluster event propagation for cross-node config updates
- Webapp system console rewrite: local state + plugin API instead of props.value/props.onChange through Mattermost config flow
- Config API helpers in client.tsx

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Phase 4 (final) of config-to-database migration. Removes:
- database/schema.go: Table creation now handled by Morph migrations
- OnConfigurationChange hook: Config updates now flow exclusively through PUT /admin/config -> DB -> cluster event -> all nodes reload

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

- Reduce Morph statement timeout from 100000s to 300s
- Replace non-unique index with partial unique index on Agents_ConfigHistory(Active) WHERE Active = true to enforce single active config row at the DB level
- Extract PluginConfig type to shared plugin_config_types.tsx, replace any types in client.tsx config API helpers
- Add intl to useEffect dependency array in config.tsx

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* Enable tools for channel mentions. Allow @mention flows to keep tools enabled while preserving DM-only defaults for other non-DM paths.
* Redact tool calls in channel streams. Store full tool calls in KV and send only redacted payloads to avoid leaking arguments in non-DM channels.
* Store channel tool results privately. Use KV-backed tool call data in channels, persist results privately, update redacted post status, and pause before LLM continuation.
* Add private tool approval endpoints. Expose requester-only endpoints for private tool call/result retrieval and result approval submission.
* Continue after tool result approval. Load private tool results for channel approvals, inject accepted results for LLM continuation, clear pending flags, and clean up KV.
* Add two-stage tool approval UI. Fetch private tool call/results for requesters, hide sensitive details for others, and route approvals to call or result stages.
* Test channel tool approvals and redaction. Add unit coverage for channel tool call redaction, KV storage, and requester-only access to private endpoints.
* UX fixes
* fix trashing
* Fix linter
* fix linter
* UX fixes
* Add tool permissions helper functions for channel-based tool access. Adds utility functions to manage tool availability in channels:
  - allowToolsInChannelFromPost: Check if tools are allowed from post props
  - setAllowToolsInChannelProp: Set tool allowance prop on posts
  - applyToolAvailability: Apply tool availability logic to context

  These support the feature that allows tools to be used in channels when explicitly enabled, rather than being limited to DMs only.
* Stabilize citation selectors in e2e tests. Add stable data-testid hooks and retryable citation waits to reduce flakes in real-api runs.
* Add channel-based tool approval flow with private KV storage
  - Add two-stage tool approval for channel conversations (call approval then result approval)
  - Store private tool call arguments and results in KV store for requester-only access
  - Add API endpoints for fetching private tool calls and results
  - Use centralized mmapi.IsKVNotFound helper for consistent KV error handling
  - Clean up KV entries after tool flow completion
  - Add UI components for tool approval and result review in channels
* Fix notifications for bots because of tool call chains
* Address CodeRabbit
* Fix field name references after merge: c.config -> c.configProvider
* fix linter
* Fix debug-panel e2e test radio indices after new setting added. Update radio button indices in debug-panel.spec.ts to account for the new "Enable Channel Mention Tool Calling" setting merged from master. Also fix ESLint lines-around-comment violations in llmbot_post.tsx.
* When tool result is approved, move it from redacted KVStore to post props to preserve future context.
* Add e2e tests
* Refactor tool_handling.go
* Add e2e tests to CI pipelines
* Address Cursor feedback
* Add a few suggested fixes around connection pools; don't show regenerate in channel context.
- Add intl, props.registerSaveAction, props.unRegisterSaveAction to useEffect dependency array to prevent stale closures
- Replace .map() with .forEach() for avatar upload side effects

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…504) The IsDMWith checks in the 4 tool-related API handlers used bot.GetMMBot().UserId from the botUsername query parameter, which could resolve to the wrong bot. When the DM was with a different bot, the check returned false and the config guard incorrectly blocked the request with HTTP 403. Use post.UserId instead, which is always the bot that created the post in the conversation. This is consistent with how HandleToolCall in tool_handling.go already determines the bot. Also strengthen the disabled-tools system prompt to prevent LLMs from hallucinating tool call XML when tools are listed but unavailable in channel contexts. Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
With OnConfigurationChange() removed for database-config, mmctl config patch no longer propagates config to the plugin. Switch installPlugin to use the plugin's admin API (PUT /admin/config) to set config after enabling the plugin, fixing all failing real-API E2E pipelines. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Normalize empty config: ensure bots/services are arrays when API returns null
- Update verifyBotInDatabase and getBotFromDatabase to query Agents_ConfigHistory instead of pluginkeyvaluestore (config is now stored in DB, not PluginSettings)

Fixes 15 failing bot-configuration e2e tests ("Plugin mattermost-ai configuration not found")

Co-authored-by: Cursor <cursoragent@cursor.com>
Force-pushed from ab7e7a7 to 4a9d7eb
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Pull request overview
Copilot reviewed 30 out of 32 changed files in this pull request and generated 4 comments.
@coderabbitai resolve
✅ Actions performed: Comments resolved and changes approved.
Add PluginAdminConfigApi and PluginRoutesApi for GET/PUT admin config and authenticated plugin routes, and refactor BotConfigHelper, ToolConfigAPIHelper, and MattermostContainer.setPluginConfig to use them instead of ad hoc fetch. ToolConfigAPIHelper user-facing methods no longer take a redundant baseUrl. Stabilize service-management login (optional longer channel_view wait) and real-api auto-run policy (explicit town-square channel_id, flexible completion assertion).

Made-with: Cursor
Security Review: No Findings

Reviewed the full diff (55 files, ~2500 lines changed) for this PR migrating plugin configuration from `config.json` to database storage (`Agents_ConfigHistory` table).

Areas Examined

- Admin Config API (`GET`/`PUT /admin/config`): Both endpoints are registered under `adminRouter`, which applies `mattermostAdminAuthorizationRequired`. This middleware authenticates via the server-injected `Mattermost-User-Id` header and checks `model.PermissionManageSystem`. No bypass path.
- SQL injection: All queries in `store/config.go`, `store/system.go`, and `conversations/store.go` use parameterized placeholders (`$1`, `$2`). No string concatenation in query construction.
- Concurrency & races: `SaveConfig` uses `pg_advisory_xact_lock` within a transaction plus a partial unique index (`WHERE Active = true`) to ensure exactly one active config. Schema migrations use Morph's advisory lock plus a cluster mutex. Config data migration uses a separate cluster mutex. Properly serialized.
- Secrets handling: API keys are stored and returned in plaintext in `Agents_ConfigHistory`, consistent with the prior approach through Mattermost's `LoadPluginConfiguration`/`SavePluginConfig`. The admin-only access restriction is preserved.
- Error information leakage: `handleGetConfig` and `handleSaveConfig` use `c.AbortWithError()`, which sets the HTTP status code but does not write error details to the response body. Errors are added to `c.Errors` and logged server-side by the `gin` logger middleware.
- MCP auto-approval scope change (`conversations.go`, `tool_handling.go`): Condition changed from `allowToolsInChannel` to `!toolsDisabled`. Since `toolsDisabled = !isDM && !allowToolsInChannel`, this extends MCP auto-approval to DMs. This is correct: DMs already have tools enabled and auto-run support via `appendDMAutoRunOptions`, and the `toolPolicyChecker` still validates each individual tool's per-tool policy before auto-executing. The change makes DMs consistent with the existing tool execution path.
- Cluster event handling: `OnPluginClusterEvent` reads config from the database (trusted source), not from the cluster event payload. No injection vector.
- Config migration: Runs server-side at plugin activation under cluster mutex. No user-facing attack surface. The `loadLegacyConfig` callback only calls `p.API.LoadPluginConfiguration`, a server-side operation.
- Webapp changes: System console config now uses dedicated plugin admin API endpoints (`/plugins/mattermost-ai/admin/config`) with the existing Mattermost session token via `Client4.getOptions()`. No change in auth boundary.

No medium, high, or critical vulnerabilities identified in added or modified code.
Sent by Cursor Automation: Find vulnerabilities
🧹 Nitpick comments (3)
e2e/helpers/mm.ts (1)
162-165: Consider updating the legacy function signature for feature parity. The legacy `login` wrapper doesn't forward the `options` parameter, so callers using this function cannot leverage the configurable timeout. This is acceptable if the goal is to keep the legacy interface frozen, but you might want to pass through `options` for consistency.

♻️ Optional: forward the options parameter

```diff
-export const login = async (page: Page, url: string, username: string, password: string) => {
+export const login = async (
+    page: Page,
+    url: string,
+    username: string,
+    password: string,
+    options?: {channelViewTimeoutMs?: number},
+) => {
     const mmPage = new MattermostPage(page);
-    await mmPage.login(url, username, password);
+    await mmPage.login(url, username, password, options);
 };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/helpers/mm.ts` around lines 162-165: The legacy wrapper function login should accept and forward an optional options parameter to maintain feature parity with MattermostPage.login; update the signature of export const login to include options (e.g., options?: LoginOptions) and pass that options argument through when calling mmPage.login(url, username, password, options) so callers can specify timeout/configuration via the wrapper.

e2e/tests/tool-config/real-api/auto-run-policy.spec.ts (1)
64-66: Consider using `expect(townSquare).toBeDefined()` instead of throwing. Throwing a raw `Error` will produce a less informative test failure compared to using Playwright's assertion. This also keeps the failure reporting consistent with other assertions in the file.

Suggested change

```diff
 const townSquare = channels.find((c) => c.name === 'town-square');
-if (!townSquare) {
-    throw new Error('e2e setup: town-square channel not found');
-}
+expect(townSquare, 'e2e setup: town-square channel not found').toBeDefined();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/tests/tool-config/real-api/auto-run-policy.spec.ts` around lines 64-66: Replace the manual throw with a Playwright assertion to get consistent, informative test failures: locate the check for townSquare in auto-run-policy.spec.ts (the variable townSquare) and change the throw new Error('e2e setup: town-square channel not found') into expect(townSquare).toBeDefined() so Playwright's reporter/trace captures the failure like other assertions in the file.

e2e/helpers/plugin-http.ts (1)
116-126: Inconsistent error handling: `getJson` uses `statusText` while `postJson`/`putJson` use `readErrorBody`. For consistency and better debugging, consider using `readErrorBody` here as well to capture response body details on failure.

♻️ Suggested fix for consistency

```diff
 async getJson(path: string, token: string): Promise<unknown> {
     const response = await fetch(this.pluginUrl(path), {
         headers: {
             Authorization: `Bearer ${token}`,
         },
     });
     if (!response.ok) {
-        throw new Error(`${path} failed: ${response.status} ${response.statusText}`);
+        const text = await readErrorBody(response);
+        throw new Error(`${path} failed: ${response.status} ${text}`);
     }
     return response.json();
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/helpers/plugin-http.ts` around lines 116 - 126, getJson currently throws an error using response.statusText which is inconsistent with postJson/putJson; change the failure path in getJson to call the existing readErrorBody helper (same used by postJson/putJson) to capture the response body, await its result, and include that body plus status code in the thrown Error so logs/debugging match postJson/putJson behavior (refer to getJson, postJson, putJson and readErrorBody to locate and align the implementation).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@e2e/helpers/mm.ts`:
- Around line 162-165: The legacy wrapper function login should accept and
forward an optional options parameter to maintain feature parity with
MattermostPage.login; update the signature of export const login to include
options (e.g., options?: LoginOptions) and pass that options argument through
when calling mmPage.login(url, username, password, options) so callers can
specify timeout/configuration via the wrapper.
In `@e2e/helpers/plugin-http.ts`:
- Around line 116-126: getJson currently throws an error using
response.statusText which is inconsistent with postJson/putJson; change the
failure path in getJson to call the existing readErrorBody helper (same used by
postJson/putJson) to capture the response body, await its result, and include
that body plus status code in the thrown Error so logs/debugging match
postJson/putJson behavior (refer to getJson, postJson, putJson and readErrorBody
to locate and align the implementation).
In `@e2e/tests/tool-config/real-api/auto-run-policy.spec.ts`:
- Around line 64-66: Replace the manual throw with a Playwright assertion to get
consistent, informative test failures: locate the check for townSquare in
auto-run-policy.spec.ts (the variable townSquare) and change the throw new
Error('e2e setup: town-square channel not found') into
expect(townSquare).toBeDefined() so Playwright's reporter/trace captures the
failure like other assertions in the file.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: 30d42f54-2c62-40bb-9bde-dac9a1759319
📒 Files selected for processing (12)
- e2e/helpers/bot-config.ts
- e2e/helpers/mm.ts
- e2e/helpers/mmcontainer.ts
- e2e/helpers/plugin-http.ts
- e2e/helpers/tool-config.ts
- e2e/tests/semantic-search/search-sources.spec.ts
- e2e/tests/system-console/service-management-ui.spec.ts
- e2e/tests/tool-config/real-api/auto-run-policy.spec.ts
- e2e/tests/tool-config/real-api/disabled-tool.spec.ts
- e2e/tests/tool-config/tool-toggle.spec.ts
- e2e/tests/tool-config/user-toggle.spec.ts
- e2e/tests/tool-config/vetted-seed.spec.ts
RunRealAPIContainer (health check + Mattermost) can exceed the default 60s Playwright hook timeout on slow runners; match the timeout used in reasoning-display.spec.

Made-with: Cursor
Security Review: No Findings
Reviewed the full diff (56 files, ~2500 lines changed) for this PR migrating plugin configuration from `config.json` to database storage (`Agents_ConfigHistory` table).

Areas Examined

- Admin Config API (`GET`/`PUT /admin/config`): Both endpoints are registered under `adminRouter`, which applies `mattermostAdminAuthorizationRequired`. This middleware authenticates via the server-injected `Mattermost-User-Id` header and checks `model.PermissionManageSystem`. No bypass path exists.
- SQL injection: All queries in `store/config.go`, `store/system.go`, and `conversations/store.go` use parameterized placeholders (`$1`, `$2`). No string concatenation in query construction.
- Concurrency & races: `SaveConfig` uses `pg_advisory_xact_lock` within a transaction plus a partial unique index (`WHERE Active = true`) to ensure exactly one active config. Schema migrations use Morph's advisory lock plus a cluster mutex. Config data migration uses a separate cluster mutex.
- Secrets handling: API keys (`ServiceConfig.APIKey`, `AWSSecretAccessKey`, `MCPServerConfig.ClientSecret`, web search API keys) are stored and returned in plaintext via the admin-only endpoint. This is consistent with the prior approach through Mattermost's `LoadPluginConfiguration`/system console, which also exposed the same secrets to system admins. The admin-only access restriction is preserved. Historical config rows (`Active = false`) are not exposed through any API.
- Error information leakage: `handleGetConfig` and `handleSaveConfig` use `c.AbortWithError()`, which sets the HTTP status code but does not write the error message to the response body. Errors are stored in `c.Errors` and logged server-side by the `gin` logger middleware.
- MCP auto-approval scope change (`conversations.go`, `tool_handling.go`): Condition changed from `allowToolsInChannel` to `!toolsDisabled`. Since `toolsDisabled = !isDM && !allowToolsInChannel`, this extends MCP auto-approval to DMs. This is correct — DMs already have tools enabled, and the `toolPolicyChecker` still validates each individual tool's per-tool policy before auto-executing.
- Cluster event handling: `OnPluginClusterEvent` reads config from the database (trusted source), not from the cluster event payload. No injection vector.
- Config migration: Runs server-side at plugin activation under a cluster mutex. The `loadLegacyConfig` callback only calls `p.API.LoadPluginConfiguration`, a server-side operation. No user-facing attack surface.
- Webapp changes: System console config now uses dedicated plugin admin API endpoints with the existing Mattermost session token via `Client4.getOptions()`. No change in auth boundary.
- Post meta deletion: `DeletePostMetaForDeletedPost` uses `post.Id` from the Mattermost server hook (trusted source) with parameterized SQL. No injection risk.

No medium, high, or critical vulnerabilities identified in added or modified code.
Sent by Cursor Automation: Find vulnerabilities
Export REAL_API_BEFORE_ALL_TIMEOUT_MS (180s) from real-api-container and use it in beforeAll for RunRealAPIContainer, tool-config real API, llmbot post component, channel analysis, multiplayer tool-calling, and system console live-service flows so Docker/API setup cannot hit Playwright's default 60s beforeAll timeout on CI. Made-with: Cursor
Security Review: No Findings
Reviewed the full diff (66 files, ~2500 lines changed) for this PR migrating plugin configuration from `config.json` to database storage (`Agents_ConfigHistory` table).

Areas Examined

- Admin Config API (`GET`/`PUT /admin/config`): Both endpoints are registered under `adminRouter`, which applies `mattermostAdminAuthorizationRequired`. This middleware authenticates via the server-injected `Mattermost-User-Id` header and checks `model.PermissionManageSystem`. No bypass path exists.
- SQL injection: All queries in `store/config.go`, `store/system.go`, and `conversations/store.go` use parameterized placeholders (`$1`, `$2`). No string concatenation in query construction.
- Concurrency & races: `SaveConfig` uses `pg_advisory_xact_lock` within a transaction plus a partial unique index (`WHERE Active = true`) to ensure exactly one active config row. Schema migrations use Morph's advisory lock plus a cluster mutex. Config data migration uses a separate cluster mutex. Properly serialized.
- Secrets handling: API keys (`ServiceConfig.APIKey`, web search API keys, MCP secrets) are stored and returned in plaintext via the admin-only endpoint. This is consistent with the prior approach through Mattermost's `LoadPluginConfiguration`/system console, which also exposed the same secrets to system admins. The admin-only access restriction is preserved. Historical config rows (`Active = false`) are not exposed through any API endpoint.
- Error information leakage: `handleGetConfig` and `handleSaveConfig` use `c.AbortWithError()`, which sets the HTTP status code but does not write error message details to the response body. Errors are stored in `c.Errors` and logged server-side.
- MCP auto-approval scope change (`conversations.go`, `tool_handling.go`): Condition changed from `allowToolsInChannel` to `!toolsDisabled`. Since `toolsDisabled = !isDM && !allowToolsInChannel`, this correctly extends MCP auto-approval to DMs (which already had tools enabled). The `toolPolicyChecker` still validates each individual tool's per-tool policy before auto-executing.
- Cluster event handling: `OnPluginClusterEvent` reads config from the database (trusted source), not from the cluster event payload. No injection vector.
- Config migration: Runs server-side at plugin activation under a cluster mutex. The `loadLegacyConfig` callback only calls `p.API.LoadPluginConfiguration`, a server-side operation. No user-facing attack surface.
- Webapp changes: System console config now uses dedicated plugin admin API endpoints with the existing Mattermost session token via `Client4.getOptions()`. No change in auth boundary.
- Post meta deletion: `DeletePostMetaForDeletedPost` uses `post.Id` from the Mattermost server hook (trusted source) with parameterized SQL. No injection risk.

No medium, high, or critical vulnerabilities identified in added or modified code.
Sent by Cursor Automation: Find vulnerabilities
Resolve go.mod: keep testcontainers postgres module at v0.38.0 to match testcontainers-go v0.38.0. Made-with: Cursor
Security Review: No Findings
Reviewed the full diff (66 files, ~2500 lines changed) for this PR migrating plugin configuration from `config.json` to database storage (`Agents_ConfigHistory` table).

Areas Examined

- Admin Config API (`GET`/`PUT /admin/config`): Both endpoints are registered under `adminRouter`, which applies `mattermostAdminAuthorizationRequired`. This middleware authenticates via the server-injected `Mattermost-User-Id` header and checks `model.PermissionManageSystem`. No bypass path exists.
- SQL injection: All queries in `store/config.go`, `store/system.go`, and `conversations/store.go` use parameterized placeholders (`$1`, `$2`) or the squirrel query builder. No string concatenation in query construction.
- Concurrency & races: `SaveConfig` uses `pg_advisory_xact_lock` within a transaction plus a partial unique index (`WHERE Active = true`) to enforce exactly one active config row. Schema migrations use Morph's advisory lock plus a cluster mutex. Config data migration uses a separate cluster mutex. Properly serialized.
- Secrets handling: API keys (`ServiceConfig.APIKey`, `AWSSecretAccessKey`, `MCPServerConfig.ClientSecret`, web search API keys) are stored and returned in plaintext via the admin-only endpoint. This is consistent with the prior approach through Mattermost's `LoadPluginConfiguration`/system console, which exposed the same secrets to system admins. The admin-only access restriction is preserved. Historical config rows (`Active = false`) are not exposed through any API endpoint.
- Error information leakage: `handleGetConfig` and `handleSaveConfig` use `c.AbortWithError()`, which sets the HTTP status code but does not write error message details to the response body. Errors are stored in `c.Errors` and logged server-side.
- MCP auto-approval scope change (`conversations.go`, `tool_handling.go`): Condition changed from `allowToolsInChannel` to `!toolsDisabled`. Since `toolsDisabled = !isDM && !allowToolsInChannel`, this correctly extends MCP auto-approval to DMs (which already had tools enabled via `appendDMAutoRunOptions`). The `toolPolicyChecker` still validates each individual tool's per-tool policy before auto-executing. The same logic is consistently applied in `completeAndStreamToolResponse` for tool follow-ups.
- Cluster event handling: `OnPluginClusterEvent` reads config from the database (trusted source), not from the cluster event payload. No injection vector.
- Config migration: Runs server-side at plugin activation under a cluster mutex. The `loadLegacyConfig` callback only calls `p.API.LoadPluginConfiguration`, a server-side operation. No user-facing attack surface.
- Webapp changes: System console config now uses dedicated plugin admin API endpoints with the existing Mattermost session token via `Client4.getOptions()`. No change in auth boundary.
- Post meta deletion: `DeletePostMetaForDeletedPost` uses `post.Id` from the Mattermost server hook (trusted source) with the squirrel builder (parameterized). No injection risk.

No medium, high, or critical vulnerabilities identified in added or modified code.
Sent by Cursor Automation: Find vulnerabilities
get_channel_info only accepts channel_id or channel_name; the mock used channel_display_name, which is ignored on unmarshal and caused the real MCP tool to fail with insufficient parameters, so the follow-up read_channel step never appeared. Made-with: Cursor
Security Review: No Findings
Reviewed the full diff (66 files, ~2500 lines changed) for this PR migrating plugin configuration from `config.json` to database storage (`Agents_ConfigHistory` table).

Areas Examined

- Admin Config API (`GET`/`PUT /admin/config`): Both endpoints are registered under `adminRouter`, which applies `mattermostAdminAuthorizationRequired`. This middleware authenticates via the server-injected `Mattermost-User-Id` header and checks `model.PermissionManageSystem`. No bypass path exists.
- SQL injection: All queries in `store/config.go`, `store/system.go`, and `conversations/store.go` use parameterized placeholders (`$1`, `$2`) or the squirrel query builder. No string concatenation in query construction.
- Concurrency & races: `SaveConfig` uses `pg_advisory_xact_lock` within a transaction plus a partial unique index (`WHERE Active = true`) to enforce exactly one active config row. Schema migrations use Morph's advisory lock plus a cluster mutex. Config data migration uses a separate cluster mutex. Properly serialized.
- Secrets handling: API keys (`ServiceConfig.APIKey`, `AWSSecretAccessKey`, `MCPServerConfig.ClientSecret`, web search API keys) are stored and returned in plaintext via the admin-only endpoint. This is consistent with the prior approach through Mattermost's `LoadPluginConfiguration`/system console, which exposed the same secrets to system admins. The admin-only access restriction is preserved. Historical config rows (`Active = false`) are not exposed through any API endpoint.
- Error information leakage: `handleGetConfig` and `handleSaveConfig` use `c.AbortWithError()`, which sets the HTTP status code but does not write error message details to the response body. Errors are stored in `c.Errors` and logged server-side.
- MCP auto-approval scope change (`conversations.go`, `tool_handling.go`): Condition changed from `allowToolsInChannel` to `!toolsDisabled`. Since `toolsDisabled = !isDM && !allowToolsInChannel`, this correctly extends MCP auto-approval to DMs (which already had tools enabled via `appendDMAutoRunOptions`). The `toolPolicyChecker` still validates each individual tool's per-tool policy before auto-executing. The same logic is consistently applied in `completeAndStreamToolResponse` for tool follow-ups.
- Cluster event handling: `OnPluginClusterEvent` reads config from the database (trusted source), not from the cluster event payload. No injection vector.
- Config migration: Runs server-side at plugin activation under a cluster mutex. The `loadLegacyConfig` callback only calls `p.API.LoadPluginConfiguration`, a server-side operation. No user-facing attack surface.
- Webapp changes: System console config now uses dedicated plugin admin API endpoints with the existing Mattermost session token via `Client4.getOptions()`. No change in auth boundary.
- Post meta deletion: `DeletePostMetaForDeletedPost` uses `post.Id` from the Mattermost server hook (trusted source) with the squirrel builder (parameterized). No injection risk.

No medium, high, or critical vulnerabilities identified in added or modified code.
Sent by Cursor Automation: Find vulnerabilities
```go
//
// loadLegacyConfig is a function that loads the legacy JSON configuration (e.g., from
// config.json via the plugin API). It is only called if migration appears necessary.
func MigrateServicesToBots(cfg Config, loadLegacyConfig func() (LegacyServiceConfig, error)) (Config, bool, error) {
```
Not going to block merge, but I thought the idea was we were going to change the plugin ID and do a breaking change so there is no need for migration code. Migration script as a separate tool if needed. This won't work anyway if we change the plugin ID.
We discussed this offline, we're going to have an upgrade note that you must first upgrade to this version before going beyond 2.x, and we'll change the ID in 2.1
```jsx
>
<option value='none'>
    {intl.formatMessage({defaultMessage: 'None'})}
<option value='minimal'>
```
What's with this change? Seems unrelated?
- Restore require.True when pgvector is unavailable (match master). - Revert Enterprise slash-command client hook to pass-through for unlicensed users. - Remove MCP integration test container retries; use 4m timeout instead. Made-with: Cursor
Reasoning effort dropdown: This looks unrelated to the config-in-DB migration at first glance, but it is intentional contract alignment: the dropdown options are being brought in line with the reasoning-effort values we document and validate server-side for the OpenAI Responses API.

Happy to split this into a follow-up PR if we want this changeset strictly scoped to database-backed config only.
Security Review: No Findings
Reviewed the full diff (64 files, ~2500 lines changed) for this PR migrating plugin configuration from `config.json` to database storage (`Agents_ConfigHistory` table).

Areas Examined

- Admin Config API (`GET`/`PUT /admin/config`): Both endpoints are registered under `adminRouter`, which applies `mattermostAdminAuthorizationRequired`. This middleware authenticates via the server-injected `Mattermost-User-Id` header and checks `model.PermissionManageSystem`. No bypass path exists.
- SQL injection: All queries in `store/config.go`, `store/system.go`, and `conversations/store.go` use parameterized placeholders (`$1`, `$2`). No string concatenation in query construction.
- Concurrency & races: `SaveConfig` uses `pg_advisory_xact_lock` within a transaction plus a partial unique index (`WHERE Active = true`) to enforce at most one active config row. Schema migrations use Morph's advisory lock plus a cluster mutex. Config data migration uses a separate cluster mutex.
- Secrets handling: API keys (`ServiceConfig.APIKey`, `AWSSecretAccessKey`, web search API keys) are stored and returned in plaintext via the admin-only endpoint. This is consistent with the prior approach through Mattermost's `LoadPluginConfiguration`/system console, which also exposed the same secrets to system admins. The admin-only access restriction is preserved. Historical config rows (`Active = false`) are not exposed through any API.
- Error information leakage: `handleGetConfig` and `handleSaveConfig` use `c.AbortWithError()`, which sets the HTTP status code but does not write error details to the response body — errors are added to `c.Errors` and logged server-side by the `gin` logger middleware.
- MCP auto-approval scope change (`conversations.go`, `tool_handling.go`): Condition changed from `allowToolsInChannel` to `!toolsDisabled`. Since `toolsDisabled = !isDM && !allowToolsInChannel`, this extends MCP auto-approval to DMs. This is correct — DMs already have tools enabled and auto-run support via `appendDMAutoRunOptions`, and the `toolPolicyChecker` still validates each individual tool's per-tool policy before auto-executing.
- Cluster event handling: `OnPluginClusterEvent` reads config from the database (trusted source), not from the cluster event payload. No injection vector.
- Config migration: Runs server-side at plugin activation under a cluster mutex. The `loadLegacyConfig` callback only calls `p.API.LoadPluginConfiguration`, a server-side operation. No user-facing attack surface.
- Post meta deletion: `DeletePostMetaForDeletedPost` uses `post.Id` from the Mattermost server hook (trusted source) with parameterized SQL via the squirrel builder. No injection risk.
- Webapp changes: System console config now uses dedicated plugin admin API endpoints with the existing Mattermost session token via `Client4.getOptions()`. No change in auth boundary.
- E2E test helpers: Config operations switched from `mmctl config patch` to plugin admin API calls with proper `Authorization: Bearer` headers sourced from `client.getToken()`. No hardcoded credentials.

No medium, high, or critical severity vulnerabilities identified in added or modified code.
Sent by Cursor Automation: Find vulnerabilities
runReindexJob mutates the job status pointer; returning newJobStatus after starting the goroutine could observe JobStatusCompleted on fast runners. Return a snapshot of the initial running status before the goroutine runs. Made-with: Cursor
Security Review: No Issues Found
Reviewed the config migration from config.json to database storage, including the new admin API endpoints, database store layer, cluster event propagation, and frontend changes. No medium/high/critical vulnerabilities identified.
Key areas verified:

- Admin authorization: New `GET`/`PUT /admin/config` endpoints are correctly gated by the `mattermostAdminAuthorizationRequired` middleware, which checks `PermissionManageSystem` via the server-injected `Mattermost-User-Id` header
- SQL injection: All queries in `store/config.go` and `store/system.go` use parameterized statements (`$1`, `$2`)
- Secret handling: API keys/credentials in config are only accessible to system admins (consistent with prior config.json behavior); no secrets are logged during migration or cluster events
- Concurrency: Config saves are serialized with `pg_advisory_xact_lock` and schema migrations use cluster mutexes
- Cluster events: Internal plugin cluster events (`OnPluginClusterEvent`) cannot be triggered by external users
Sent by Cursor Automation: Find vulnerabilities
* Migrate plugin config from config.json to database (#503)
* feat: Step B — content block types for conversation entities
* feat: Step C — unified ToolRunner for call-execute-recall loop
* feat: Step A — schema + store CRUD for conversation entities
* fix: Step B — merge tool_result into matching tool_use in BlocksToPost
* fix: lint auto-fix for unnecessary type conversion in content block test
* feat: Step N — webapp conversation hook and API client
* feat: Step F — streaming turn persistence
* feat: Step D — conversation service core
* fix: Step F — persist annotation data and cleaned text in turn blocks
* feat: Step O — webapp LLMBotPost rewrite with conversation entities
* feat: Step P — webapp RHS threads and client cleanup
* feat: Step K — search as conversation entities
* feat: Step J — channel analysis as conversation entities
* feat: Step I — thread analysis as conversation entities
* feat: Step H — channel mentions and multi-bot via conversation entities
* feat: Step G — DM conversations via conversation entities
* Fix gosec G115 integer overflow warnings in conversation store
* fix: wire conversation service to search in server/main.go
* fix: restore prematurely removed tool private endpoints and fix thread analysis test setup
* feat: Step L — remove old code paths replaced by conversation entities
* feat: Step M — drop LLM_PostMeta table
* fix: update e2e tests for conversation entities migration
* Address PR review feedback: race conditions, auth bypass, and SearchSources
* Fix missing parameter in api_admin_test.go New() call after merge
* Fix conversation entity migration, race conditions, and API issues
  - Add title backfill from LLM_PostMeta before dropping table (migration 6)
  - Clean duplicate sequences before adding UNIQUE constraint (migration 7)
  - Add UserID index for conversation summaries query (migration 8)
  - Add atomic CreateTurnAutoSequence with retry to prevent TOCTOU races
  - Set conversation_id on posts before creation to fix frontend timing hole
  - Split ProcessDMRequest so conversation is created before placeholder post
  - Guard title generation to only fire on new conversations
  - Remove duplicate LLM loop from HandleToolResult (visibility-only update)
  - Extract shouldAutoExecuteTool to deduplicate 4 identical callbacks
  - Replace correlated subquery with LEFT JOIN in summaries query
  - Return ErrConversationNotFound from GetConversationByThreadAndBot
  - Add frontend loading/error states for conversation fetch
* Fix conversation entity bugs: race conditions, missing data, auth gaps, and web search annotations
* Fix jsonb insert by casting json.RawMessage to string for lib/pq
* Fix streaming regression: forward LLM tokens in real-time instead of buffering
* Fix tool approval flow: auto-approved status, deriveApprovalStage, and helpers
* Fix tool result visibility and auto_run_everywhere share-skip behavior
* Address CodeRabbitAI feedback: messageBuilder sync, dead siteURL guard, channel shared flag
* Fix chat history threads query, channel analysis RootPostID, and e2e test robustness
* Scope tool call WebSocket broadcasts to requester and sanitize turn content for display
* Gate channel tool follow-up on Share approval and scope conversations per user
* Collapse conversation-entity migrations into 000005 and drop stale code
* Fix multi-round tool rendering and avoid empty bifrost tool results
* Create streaming assistant turn at finalize so it follows tool-round turns
* Aggregate tool calls backwards from anchor to find tool-round turns
* Scope tool-call aggregation per post and keep pending-tool text intact
* Add e2e coverage for per-post tool-call scoping after approval flow
* Restore requester ownership fallback, fix search prop race, and error-cache useConversation
* Fix channel tool merging, persisted citations, interval RootPostID, and tool-approval error classes
* Remove sendOAuthNotifications ephemeral-post path to match master's tool-provider UI model
* Repair unique index on LLM_Conversations to be per-user for legacy installs
* Gate channel auto-execution on auto_run_everywhere and pass full tool context to follow-up LLM
* Tag auto-executed tool rounds as auto_approved and align e2e tests with channel policy
* Revert auto_run_everywhere seeding in multiplayer tool-calling tests
* Stop the rejected/keep-private channel tool-approval loop
* Rename MCP tool policy auto_run to auto_run_in_dm
* Add structured fields to LLM Call log and entry logs on approval handlers
* Carry resolved tool-call status on ToolTurn.AssistantToolCalls
* Add decided_at on tool_result blocks to distinguish pending vs kept-private
* Server-compute approval_state and delete client-side auto-submit loop
* Make HandleToolResult idempotent via decided_at and drop heuristics
* Redact kept-private tool results before sending to channel-visible LLM follow-ups
* Redact unshared tool content by default and pin the contract with tests
* Address review feedback: owner-only channel analysis and redact unshared tool args
* Address review feedback: honor tool-enabled in share decision and align streaming isDM with IsDMWith


Summary
- Plugin configuration moves from `config.json` to direct database storage using Morph-based schema migrations
- New `store/` package with Agents_System, Agents_ConfigHistory, and LLM_PostMeta tables managed by Morph
- New admin config API (`/admin/config`) with cluster event propagation for HA environments

Key Design Decisions
Commits (4 atomic phases)
- `c3f9c8c` — Store package with Morph migration infrastructure
- `8e40275` — Config DB persistence and one-time data migration
- `62ce319` — Admin config API and cluster event propagation
- `9942aef` — Remove dead code (database/, OnConfigurationChange)

Test plan
- `go test ./store/...` — Integration tests with real PostgreSQL via testcontainers (migrations, system KV, config CRUD, history, data migration lifecycle)
- `go test ./config/...` — Legacy migration transform tests (pure functions, no mocking)
- `go test ./api/...` — Config endpoint tests (GET/PUT, round-trip, error cases)
- `make deploy` on fresh install — Morph creates all tables, system console works with empty/default config
- `make deploy` upgrade from existing install — Config migrated from config.json to DB, system console loads from DB

🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Improvements
Tests