
feat: add OpenClaw as supported LLM provider#547

Closed

fredmisasi-oc wants to merge 1 commit into mattermost:master from fredmisasi-oc:master

Conversation


@fredmisasi-oc fredmisasi-oc commented Mar 7, 2026

Summary

Adds OpenClaw as a first-class supported LLM provider type in the Agents plugin.

OpenClaw is an open-source personal AI gateway that exposes an OpenAI-compatible HTTP API. This PR allows users to point the Agents plugin at their local (or remote) OpenClaw gateway as the AI backend.

Changes

  • llm/service_types.go — adds ServiceTypeOpenClaw = "openclaw" constant
  • llm/providers.go — registers OpenClaw in the openAICompatibleProviders map with default model anthropic/claude-sonnet-4-6 and a user-configurable API URL (no fixed URL, same pattern as openaicompatible)
  • llm/configuration.go — adds openclaw to the IsValidService switch; requires only APIURL (API key is optional since gateway auth is configured separately)
  • webapp/src/components/system_console/service.tsx — adds OpenClaw to the provider display name map, dropdown, isOpenAIType check, and API URL field visibility
  • llm/providers_test.go — adds TestOpenClawProviderConfig unit test
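Taken together, the backend changes follow the plugin's existing OpenAI-compatible provider pattern. A minimal Go sketch of that pattern (type and field names here are inferred from the bullet list above, not copied from the plugin source):

```go
package main

import "fmt"

// ServiceTypeOpenClaw mirrors the constant the PR adds in llm/service_types.go.
const ServiceTypeOpenClaw = "openclaw"

// providerConfig is an illustrative stand-in for the plugin's provider entry.
type providerConfig struct {
	DefaultModel string
	FixedAPIURL  string // empty: the user supplies the URL, same pattern as openaicompatible
}

// openAICompatibleProviders sketches the registry entry described above.
var openAICompatibleProviders = map[string]providerConfig{
	ServiceTypeOpenClaw: {
		DefaultModel: "anthropic/claude-sonnet-4-6",
		FixedAPIURL:  "",
	},
}

func main() {
	cfg := openAICompatibleProviders[ServiceTypeOpenClaw]
	fmt.Printf("%s -> default model %s\n", ServiceTypeOpenClaw, cfg.DefaultModel)
}
```

Leaving FixedAPIURL empty is what makes the API URL field user-configurable in the admin console, rather than pinned to a vendor endpoint.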

Configuration

In the plugin admin console, select OpenClaw as the service type and set:

  • API URL: your OpenClaw gateway's OpenAI-compatible endpoint (e.g. http://localhost:18789/v1)
  • API Key: your OpenClaw gateway auth token
  • Default Model: openclaw:main (routes to the main agent) or any model supported by your gateway

The /v1/chat/completions endpoint must be enabled in OpenClaw config:

{ gateway: { http: { endpoints: { chatCompletions: { enabled: true } } } } }

Summary by CodeRabbit

  • New Features
    • Added OpenClaw as a new supported LLM service provider.
    • OpenClaw is now available as a selectable option in service configuration settings.
    • Requires API URL configuration for OpenClaw integration.
    • OpenClaw defaults to using Claude Sonnet 4.6 model.

@chatgpt-codex-connector

Codex usage limits have been reached for code reviews. Please ask the admins of this repo to increase the limits by adding credits; credits are required to enable repository-wide code reviews.

@coderabbitai

coderabbitai Bot commented Mar 7, 2026

📝 Walkthrough

This PR introduces support for a new OpenClaw LLM service type by adding a service constant, validation logic for API URL requirements, provider registry entry with a default Claude model, corresponding tests, and UI components for configuration.

Changes

  • Service Type Definition — llm/service_types.go
    Added new exported constant ServiceTypeOpenClaw = "openclaw" to extend the supported LLM service types.
  • Backend Configuration & Provider Registry — llm/configuration.go, llm/providers.go
    Added OpenClaw service validation requiring a non-empty APIURL and registered OpenClaw as an OpenAI-compatible provider with default model "anthropic/claude-sonnet-4-6".
  • Testing — llm/providers_test.go
    Added a test case validating the OpenClaw provider configuration (DefaultModel, empty FixedAPIURL, nil CreateTransport, false DisableStreamOptions).
  • Frontend UI — webapp/src/components/system_console/service.tsx
    Extended the service type dropdown and API URL field handling to treat OpenClaw as an OpenAI-compatible service with appropriate display mapping.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

Possibly related PRs

Suggested labels

Setup Cloud Test Server

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 20.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: the title 'feat: add OpenClaw as supported LLM provider' clearly and concisely describes the main change.



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (2)
llm/configuration.go (1)

148-150: Fix the OpenClaw auth comment.

This branch only validates APIURL, but the new comment says an API key is required. That contradiction will mislead the next person touching validation and makes it easy to reintroduce the wrong requirement.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@llm/configuration.go` around lines 148 - 150, The comment for the
ServiceTypeOpenClaw branch is misleading: it states OpenClaw requires an API
key, but the code only validates service.APIURL; update the comment to match the
actual validation or change the validation to also require the API key.
Specifically, in the ServiceTypeOpenClaw branch (reference: ServiceTypeOpenClaw
and service.APIURL) either remove the mention of an API key from the comment or
add a check for the API key field (and include service.ApiKey or the correct
auth field name) so the comment and validation remain consistent.
llm/providers_test.go (1)

69-90: Prefer a table-driven provider-config test here.

This duplicates the same assertion pattern already used for Scale. Folding provider config checks into one table over {serviceType, expectedConfig} would make the next provider addition a data-only change.

As per coding guidelines "Write Go unit tests as table-driven tests whenever possible".
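The table-driven shape the reviewer suggests might look roughly like this (a sketch, not the plugin's actual test: providerConfig and the registry map below stand in for GetOpenAICompatibleProvider and its return type):

```go
package main

import "fmt"

// providerConfig stands in for the fields the existing test asserts on.
type providerConfig struct {
	DefaultModel         string
	FixedAPIURL          string
	DisableStreamOptions bool
}

// registry stands in for the data behind GetOpenAICompatibleProvider.
var registry = map[string]providerConfig{
	"openclaw": {DefaultModel: "anthropic/claude-sonnet-4-6"},
}

// checkProviderConfigs runs the shared assertions over a table of cases,
// so adding a provider becomes a data-only change.
func checkProviderConfigs(cases map[string]providerConfig) error {
	for serviceType, want := range cases {
		got, ok := registry[serviceType]
		if !ok {
			return fmt.Errorf("%s: not registered", serviceType)
		}
		if got != want {
			return fmt.Errorf("%s: got %+v, want %+v", serviceType, got, want)
		}
	}
	return nil
}

func main() {
	err := checkProviderConfigs(map[string]providerConfig{
		"openclaw": {DefaultModel: "anthropic/claude-sonnet-4-6"},
	})
	fmt.Println(err) // nil when every case matches the registry
}
```

In a real Go test the loop body would run under t.Run with the service type as the subtest name, keeping each provider's failure independently reported.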

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@llm/providers_test.go` around lines 69 - 90, Replace the single-provider
TestOpenClawProviderConfig with a table-driven test that iterates over a slice
of test cases (each containing serviceType and expected config values) and runs
the same assertions for each case; use GetOpenAICompatibleProvider to fetch the
provider in the loop and assert fields DefaultModel, FixedAPIURL,
CreateTransport, and DisableStreamOptions against the expected values for that
case so future providers (e.g., Scale) become data-only additions and the
assertion logic is centralized.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI (base), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: 54fe4ed7-e233-43ec-8b7e-cd95092900d2

📥 Commits

Reviewing files that changed from the base of the PR and between c93aa2a and 5f39028.

📒 Files selected for processing (5)
  • llm/configuration.go
  • llm/providers.go
  • llm/providers_test.go
  • llm/service_types.go
  • webapp/src/components/system_console/service.tsx

['cohere', 'Cohere'],
['mistral', 'Mistral'],
['asage', 'asksage (Experimental)'],
['openclaw', 'OpenClaw'],

⚠️ Potential issue | 🟡 Minor

Internationalize the new OpenClaw labels.

OpenClaw is new user-facing text in both the display-name map and the dropdown, but it is hard-coded instead of going through react-intl.

As per coding guidelines "Always add i18n for new text in user-facing code".

Also applies to: 157-157

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@webapp/src/components/system_console/service.tsx` at line 45, The hard-coded
"OpenClaw" label is not internationalized; replace the literal 'OpenClaw' in the
display-name map entry (['openclaw', 'OpenClaw']) and the dropdown label usage
at the other location with react-intl usage (e.g., useIntl()/intl.formatMessage
or <FormattedMessage>) so the UI text comes from a message descriptor; add a
corresponding message id and defaultMessage (e.g., id: 'service.openclaw.name',
defaultMessage: 'OpenClaw') to the component's messages bundle or shared i18n
file and use that descriptor for both the map value and the dropdown label.

const type = props.service.type;
const intl = useIntl();
-    const isOpenAIType = type === 'openai' || type === 'openaicompatible' || type === 'azure' || type === 'cohere' || type === 'mistral' || type === 'scale';
+    const isOpenAIType = type === 'openai' || type === 'openaicompatible' || type === 'azure' || type === 'cohere' || type === 'mistral' || type === 'scale' || type === 'openclaw';

⚠️ Potential issue | 🟠 Major

Keep the OpenClaw capability checks in sync.

Adding openclaw to isOpenAIType only updates part of the form. supportsModelFetching on Line 81 still excludes it, and the credential gate on Line 85 still requires an API key for every non-openaicompatible provider. As a result, OpenClaw services can be selected here but won't auto-load models for the URL-only configuration that IsValidService accepts.

Suggested fix
-    const supportsModelFetching = type === 'anthropic' || type === 'openai' || type === 'azure' || type === 'openaicompatible';
+    const supportsModelFetching =
+        type === 'anthropic' ||
+        type === 'openai' ||
+        type === 'azure' ||
+        type === 'openaicompatible' ||
+        type === 'openclaw';

     useEffect(() => {
-        // For openaicompatible, API key is optional if there's an API URL
-        const hasRequiredCredentials = type === 'openaicompatible' ? (props.service.apiKey || props.service.apiURL) : props.service.apiKey;
+        // For OpenAI-compatible gateways, API key is optional if there's an API URL
+        const hasRequiredCredentials =
+            type === 'openaicompatible' || type === 'openclaw'
+                ? (props.service.apiKey || props.service.apiURL)
+                : props.service.apiKey;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@webapp/src/components/system_console/service.tsx` at line 72, The code added
'openclaw' to isOpenAIType but missed updating related checks: update
supportsModelFetching to include 'openclaw' so OpenClaw services will auto-load
models, and change the credential gate logic (the branch that currently requires
an API key for every non-'openaicompatible' provider) to treat 'openclaw' like
'openaicompatible' (allow URL-only/no-API-key configuration accepted by
IsValidService); ensure all references to provider capability checks
(isOpenAIType, supportsModelFetching, and the credential-check branch used
before calling IsValidService) include 'openclaw' consistently.

@crspeller crspeller closed this Mar 10, 2026