
feat(ai): add clientOutput to tool definitions for server-side result filtering#398

Open
imsherrill wants to merge 2 commits into TanStack:main from imsherrill:feat/client-output-filter

Conversation

@imsherrill
Contributor

@imsherrill imsherrill commented Mar 24, 2026

Summary

  • Add optional clientOutput transform to tool definitions that filters tool results before they reach the client
  • The full result is always sent to the LLM (for reasoning), while only the filtered result is streamed to the browser
  • Designed for tools that return sensitive data (PII, internal scores, credentials) that the model needs but the UI should not display
  • Zero breaking changes — omitting clientOutput preserves existing behavior exactly

Motivation

Many AI applications have tools that query databases, user records, or internal systems. These tools often return data the model needs for reasoning (e.g., a user's full profile with SSN, credit score, internal flags) but that should never leave the server and reach the client UI.

Today, tool results follow a single path: the same data goes to both the LLM and the client. The only workaround is middleware (onChunk or onAfterToolCall), which is per-chat-call configuration rather than a property of the tool itself. This means every call site that uses the tool must remember to add the right filtering logic.
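For illustration, the per-call-site workaround looks roughly like this — a minimal sketch in which the `onAfterToolCall` signature is assumed for illustration, not the library's actual API:

```typescript
// Hypothetical sketch of the per-call-site workaround: every chat call must
// remember to strip sensitive fields itself. The `onAfterToolCall` shape here
// is illustrative, not the actual middleware signature.
type ToolResult = Record<string, unknown>

const SENSITIVE_KEYS = ['ssn', 'internalScore']

function stripSensitive(result: ToolResult): ToolResult {
  // Drop any top-level key on the sensitive list; keep everything else.
  return Object.fromEntries(
    Object.entries(result).filter(([key]) => !SENSITIVE_KEYS.includes(key)),
  )
}

// Every call site that uses the tool has to repeat this wiring:
const chatOptions = {
  onAfterToolCall: (toolName: string, result: ToolResult): ToolResult =>
    toolName === 'lookup_user' ? stripSensitive(result) : result,
}
```

Forgetting this block at any one call site leaks the data, which is the failure mode the declarative approach below removes.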

clientOutput makes the filtering declarative and co-located with the tool definition, so it can't be forgotten:

const userLookup = toolDefinition({
  name: 'lookup_user',
  description: 'Look up user details',
  outputSchema: z.object({
    id: z.string(),
    name: z.string(),
    ssn: z.string(),
    internalScore: z.number(),
  }),
  clientOutput: (result) => ({
    id: result.id,
    name: result.name,
    // ssn and internalScore never leave the server
  }),
}).server(async (args) => db.users.find(args.query))

Changes

Core (@tanstack/ai):

  • clientOutput?: (result: any) => any on Tool interface (types.ts)
  • clientOutput?: (result: InferSchemaType<TOutput>) => unknown on ToolDefinitionConfig and ClientTool (strongly typed from outputSchema)
  • buildToolResultChunks() in TextEngine splits results: filtered for TOOL_CALL_END chunk, full for tool role message
  • ToolCallManager.executeTools() applies the same split for the standalone code path
  • clientOutput propagates through .server() and .client() via config spread (no additional code needed)
  • Errors bypass the filter (error results are never transformed)
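The split performed by buildToolResultChunks() can be pictured with a small self-contained sketch (names and shapes simplified from the actual engine code):

```typescript
// Simplified sketch of the result split described above — not the actual
// TextEngine implementation. The full result is serialized for the tool-role
// message fed back to the LLM; the clientOutput transform (when present, and
// never for errors) produces the payload streamed to the client.
interface ToolLike {
  name: string
  clientOutput?: (result: unknown) => unknown
}

function splitToolResult(
  tool: ToolLike,
  result: unknown,
  isError: boolean,
): { fullContent: string; clientContent: string } {
  const fullContent = JSON.stringify(result)
  const clientContent =
    tool.clientOutput && !isError
      ? JSON.stringify(tool.clientOutput(result))
      : fullContent // errors and filter-less tools pass through unchanged
  return { fullContent, clientContent }
}
```

Here `fullContent` corresponds to the tool-role message and `clientContent` to the TOOL_CALL_END chunk.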

Tests:

  • 7 type-level tests (client-output-types.test.ts): parameter inference from outputSchema, @ts-expect-error on bad property access, propagation through .server()/.client(), any fallback without schema, optional by default
  • 2 unit tests in tool-definition.test.ts: preserves clientOutput across definition/server/client, defaults to undefined
  • 4 unit tests in tool-call-manager.test.ts: ToolCallManager filters TOOL_CALL_END but keeps full LLM result, executeToolCalls filtering, errors bypass filter, no filter when undefined

Test plan

  • All 48 tests pass (13 new + 35 existing)
  • tsc --noEmit clean on core package
  • tsc --noEmit clean on ts-react-chat example (after build)
  • Full chat + middleware test suites pass (94 tests)
  • Manually verified in ts-react-chat example: browser receives filtered data, LLM receives full data and reasons over it correctly

Summary by CodeRabbit

  • New Features
    • Tools can now define an optional clientOutput transformation to filter tool results sent to clients, while preserving the full result for the AI model's consumption.

… filtering

Tools that return sensitive data (PII, internal scores, credentials) often
need the LLM to see the full result for reasoning while keeping that data
off the wire to the client. `clientOutput` is an optional transform on the
tool definition that splits the result: the full output feeds back to the
LLM as a tool-role message, while only the transformed output is streamed
to the client via TOOL_CALL_END.
@coderabbitai
Contributor

coderabbitai bot commented Mar 24, 2026

📝 Walkthrough

Walkthrough

A new clientOutput callback feature is introduced across type definitions and runtime implementations, enabling tools to transform results before sending them to clients while preserving full results for language model consumption. The feature is supported through type updates, execution-time filtering logic, and comprehensive test coverage.

Changes

  • Type Definitions (packages/typescript/ai/src/types.ts, packages/typescript/ai/src/activities/chat/tools/tool-definition.ts): Added optional clientOutput callback property to Tool, ClientTool, and ToolDefinitionConfig interfaces, enabling per-tool transformation of results with signature (result: InferSchemaType<TOutput>) => unknown.
  • Tool Execution Logic (packages/typescript/ai/src/activities/chat/index.ts, packages/typescript/ai/src/activities/chat/tools/tool-calls.ts): Implemented clientOutput filtering during tool result processing: parsed results are transformed via the clientOutput callback when present, with the filtered version emitted to the client via TOOL_CALL_END.result while the original result is stored in the tool message for the LLM.
  • Type & Execution Tests (packages/typescript/ai/tests/client-output-types.test.ts, packages/typescript/ai/tests/tool-definition.test.ts): Added TypeScript compile-time type inference tests validating clientOutput parameter types through tool definition, server, and client variants; confirmed clientOutput reference preservation across tool configurations.
  • Integration Tests (packages/typescript/ai/tests/tool-call-manager.test.ts): Added runtime execution tests verifying filtered results in TOOL_CALL_END chunks, full results preserved in message history for the LLM, correct handling of errors (no filtering on output-error state), and fallback behavior when clientOutput is absent.

Sequence Diagram

sequenceDiagram
    participant Client as Client
    participant Engine as TextEngine
    participant ToolExec as Tool Executor
    participant LLM as Language Model

    Client->>Engine: Request with tool use
    Engine->>ToolExec: Execute tool call
    ToolExec->>ToolExec: Parse tool result
    alt clientOutput defined
        ToolExec->>ToolExec: Apply clientOutput filter
        ToolExec->>Engine: Return filtered result
    else clientOutput undefined
        ToolExec->>Engine: Return unfiltered result
    end
    
    Engine->>Engine: Split result content
    Note over Engine: clientContent (filtered)<br/>fullContent (unfiltered)
    
    Engine->>Client: Emit TOOL_CALL_END<br/>(uses clientContent)
    Engine->>LLM: Add tool message<br/>(uses fullContent)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 A filter most clever, applied just in time,
Results split in two, by design so divine,
Clients see what they should, while the model sees all,
Tool results transformed—answering the call!

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Title check — ✅ Passed: The title accurately and clearly summarizes the main feature being added: the clientOutput property for tool definitions to enable server-side result filtering.
  • Description check — ✅ Passed: The description is comprehensive and well-structured, covering motivation, changes, and test coverage, though it does not follow the provided template sections (🎯 Changes, ✅ Checklist, 🚀 Release Impact).
  • Docstring Coverage — ✅ Passed: No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.


@nx-cloud

nx-cloud bot commented Mar 24, 2026

View your CI Pipeline Execution ↗ for commit ef97fcf

  • nx affected --targets=test:sherif,test:knip,tes... — ✅ Succeeded (3m 55s)
  • nx run-many --targets=build --exclude=examples/** — ✅ Succeeded (2s)

☁️ Nx Cloud last updated this comment at 2026-03-24 22:03:45 UTC

@pkg-pr-new

pkg-pr-new bot commented Mar 24, 2026


@tanstack/ai

npm i https://pkg.pr.new/@tanstack/ai@398

@tanstack/ai-anthropic

npm i https://pkg.pr.new/@tanstack/ai-anthropic@398

@tanstack/ai-client

npm i https://pkg.pr.new/@tanstack/ai-client@398

@tanstack/ai-devtools-core

npm i https://pkg.pr.new/@tanstack/ai-devtools-core@398

@tanstack/ai-elevenlabs

npm i https://pkg.pr.new/@tanstack/ai-elevenlabs@398

@tanstack/ai-event-client

npm i https://pkg.pr.new/@tanstack/ai-event-client@398

@tanstack/ai-fal

npm i https://pkg.pr.new/@tanstack/ai-fal@398

@tanstack/ai-gemini

npm i https://pkg.pr.new/@tanstack/ai-gemini@398

@tanstack/ai-grok

npm i https://pkg.pr.new/@tanstack/ai-grok@398

@tanstack/ai-groq

npm i https://pkg.pr.new/@tanstack/ai-groq@398

@tanstack/ai-ollama

npm i https://pkg.pr.new/@tanstack/ai-ollama@398

@tanstack/ai-openai

npm i https://pkg.pr.new/@tanstack/ai-openai@398

@tanstack/ai-openrouter

npm i https://pkg.pr.new/@tanstack/ai-openrouter@398

@tanstack/ai-preact

npm i https://pkg.pr.new/@tanstack/ai-preact@398

@tanstack/ai-react

npm i https://pkg.pr.new/@tanstack/ai-react@398

@tanstack/ai-react-ui

npm i https://pkg.pr.new/@tanstack/ai-react-ui@398

@tanstack/ai-solid

npm i https://pkg.pr.new/@tanstack/ai-solid@398

@tanstack/ai-solid-ui

npm i https://pkg.pr.new/@tanstack/ai-solid-ui@398

@tanstack/ai-svelte

npm i https://pkg.pr.new/@tanstack/ai-svelte@398

@tanstack/ai-vue

npm i https://pkg.pr.new/@tanstack/ai-vue@398

@tanstack/ai-vue-ui

npm i https://pkg.pr.new/@tanstack/ai-vue-ui@398

@tanstack/preact-ai-devtools

npm i https://pkg.pr.new/@tanstack/preact-ai-devtools@398

@tanstack/react-ai-devtools

npm i https://pkg.pr.new/@tanstack/react-ai-devtools@398

@tanstack/solid-ai-devtools

npm i https://pkg.pr.new/@tanstack/solid-ai-devtools@398

commit: ef97fcf

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/typescript/ai/src/activities/chat/index.ts`:
- Around line 1087-1102: The call to the tool's clientOutput inside the
TOOL_CALL_END construction can throw and break the stream; modify the logic
around the clientContent computation (where tool, clientOutput, result, and
fullContent are used) to invoke tool.clientOutput(result.result) inside a
try-catch, and on any error fall back to using fullContent (optionally log or
attach the caught error context), then push the chunks entry with the safe
clientContent instead of directly calling clientOutput.

In `@packages/typescript/ai/src/activities/chat/tools/tool-calls.ts`:
- Around line 219-225: The current block that applies tool.clientOutput to the
tool result can throw on JSON.parse(result) if result is a non-JSON string and
can throw on JSON.stringify if clientOutput returns circular data; wrap the
clientOutput application in a try/catch around parsing, filtering, and
stringifying (the code handling tool.clientOutput, result, parsed, and
clientResultContent) and on any error fall back to a safe, non-throwing
representation (e.g., use the original result string or a simple
"[unserializable]" placeholder) and log the error; ensure you attempt parsing
only when result is a valid JSON string (or skip parse and pass raw value to
clientOutput), and try JSON.stringify but catch TypeError from circular
structures to avoid breaking the stream.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 97277977-43c1-4c31-97a1-92ceba614d9e

📥 Commits

Reviewing files that changed from the base of the PR and between c3583e3 and ef97fcf.

📒 Files selected for processing (7)
  • packages/typescript/ai/src/activities/chat/index.ts
  • packages/typescript/ai/src/activities/chat/tools/tool-calls.ts
  • packages/typescript/ai/src/activities/chat/tools/tool-definition.ts
  • packages/typescript/ai/src/types.ts
  • packages/typescript/ai/tests/client-output-types.test.ts
  • packages/typescript/ai/tests/tool-call-manager.test.ts
  • packages/typescript/ai/tests/tool-definition.test.ts

Comment on lines +1087 to +1102
      const fullContent = JSON.stringify(result.result)

      // Apply clientOutput filter if the tool defines one
      const tool = this.tools.find((t) => t.name === result.toolName)
      const clientContent =
        tool?.clientOutput && result.state !== 'output-error'
          ? JSON.stringify(tool.clientOutput(result.result))
          : fullContent

      chunks.push({
        type: 'TOOL_CALL_END',
        timestamp: Date.now(),
        model: finishEvent.model,
        toolCallId: result.toolCallId,
        toolName: result.toolName,
-       result: content,
+       result: clientContent,

⚠️ Potential issue | 🟡 Minor

Wrap clientOutput invocation in try-catch to prevent stream breakage.

If clientOutput throws (e.g., due to unexpected result structure or JSON.stringify failing on circular references), the exception will propagate and break the entire stream. Consider catching exceptions and falling back to fullContent.

🛡️ Proposed fix to add error handling
       const fullContent = JSON.stringify(result.result)

       // Apply clientOutput filter if the tool defines one
       const tool = this.tools.find((t) => t.name === result.toolName)
-      const clientContent =
-        tool?.clientOutput && result.state !== 'output-error'
-          ? JSON.stringify(tool.clientOutput(result.result))
-          : fullContent
+      let clientContent = fullContent
+      if (tool?.clientOutput && result.state !== 'output-error') {
+        try {
+          clientContent = JSON.stringify(tool.clientOutput(result.result))
+        } catch {
+          // Fall back to full content if clientOutput fails
+        }
+      }

Comment on lines +219 to +225

          // Apply clientOutput filter if the tool defines one
          if (tool.clientOutput) {
            const parsed =
              typeof result === 'string' ? JSON.parse(result) : result
            clientResultContent = JSON.stringify(tool.clientOutput(parsed))
          }

⚠️ Potential issue | 🟡 Minor

Potential runtime error when clientOutput is applied to non-JSON string results.

If tool.execute returns a plain string (not valid JSON) and clientOutput is defined, JSON.parse(result) at line 223 will throw a SyntaxError. Additionally, if tool.clientOutput returns a value with circular references, JSON.stringify at line 224 will throw.

Since clientOutput is a convenience feature for filtering sensitive data, failures here should degrade gracefully rather than break the stream.

🛡️ Proposed fix to add defensive error handling
          // Apply clientOutput filter if the tool defines one
          if (tool.clientOutput) {
-            const parsed =
-              typeof result === 'string' ? JSON.parse(result) : result
-            clientResultContent = JSON.stringify(tool.clientOutput(parsed))
+            try {
+              const parsed =
+                typeof result === 'string' ? JSON.parse(result) : result
+              clientResultContent = JSON.stringify(tool.clientOutput(parsed))
+            } catch {
+              // Fall back to unfiltered content if clientOutput fails
+            }
          }
