feat(ai): add clientOutput to tool definitions for server-side result filtering (#398)
imsherrill wants to merge 2 commits into TanStack:main

Conversation
Tools that return sensitive data (PII, internal scores, credentials) often need the LLM to see the full result for reasoning while keeping that data off the wire to the client. `clientOutput` is an optional transform on the tool definition that splits the result: the full output feeds back to the LLM as a tool-role message, while only the transformed output is streamed to the client via TOOL_CALL_END.
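As a sketch of the intended usage (the plain-object shape below is illustrative; the actual `@tanstack/ai` tool-definition helpers and their exact signatures may differ):

```typescript
// Illustrative sketch only; the real @tanstack/ai tool-definition API may differ.
interface UserProfile {
  name: string
  email: string
  ssn: string // sensitive: must never reach the browser
  internalScore: number // sensitive: internal ranking signal
}

const lookupUserTool = {
  name: 'lookupUser',
  // The full result feeds back to the LLM as a tool-role message.
  execute: async (_args: { userId: string }): Promise<UserProfile> => ({
    name: 'Ada Lovelace',
    email: 'ada@example.com',
    ssn: '123-45-6789',
    internalScore: 0.92,
  }),
  // Only this transformed shape is streamed to the client in TOOL_CALL_END.
  clientOutput: (result: UserProfile) => ({
    name: result.name,
    email: result.email,
  }),
}
```

Because the filter lives on the definition, every chat call that uses `lookupUser` gets the same server-side redaction without per-call middleware.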
📝 Walkthrough
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client as Client
    participant Engine as TextEngine
    participant ToolExec as Tool Executor
    participant LLM as Language Model
    Client->>Engine: Request with tool use
    Engine->>ToolExec: Execute tool call
    ToolExec->>ToolExec: Parse tool result
    alt clientOutput defined
        ToolExec->>ToolExec: Apply clientOutput filter
        ToolExec->>Engine: Return filtered result
    else clientOutput undefined
        ToolExec->>Engine: Return unfiltered result
    end
    Engine->>Engine: Split result content
    Note over Engine: clientContent (filtered)<br/>fullContent (unfiltered)
    Engine->>Client: Emit TOOL_CALL_END<br/>(uses clientContent)
    Engine->>LLM: Add tool message<br/>(uses fullContent)
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 3 passed
CI Pipeline Execution ran for commit ef97fcf (Nx Cloud).
@tanstack/ai
@tanstack/ai-anthropic
@tanstack/ai-client
@tanstack/ai-devtools-core
@tanstack/ai-elevenlabs
@tanstack/ai-event-client
@tanstack/ai-fal
@tanstack/ai-gemini
@tanstack/ai-grok
@tanstack/ai-groq
@tanstack/ai-ollama
@tanstack/ai-openai
@tanstack/ai-openrouter
@tanstack/ai-preact
@tanstack/ai-react
@tanstack/ai-react-ui
@tanstack/ai-solid
@tanstack/ai-solid-ui
@tanstack/ai-svelte
@tanstack/ai-vue
@tanstack/ai-vue-ui
@tanstack/preact-ai-devtools
@tanstack/react-ai-devtools
@tanstack/solid-ai-devtools
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/typescript/ai/src/activities/chat/index.ts`:
- Around line 1087-1102: The call to the tool's clientOutput inside the
TOOL_CALL_END construction can throw and break the stream; modify the logic
around the clientContent computation (where tool, clientOutput, result, and
fullContent are used) to invoke tool.clientOutput(result.result) inside a
try-catch, and on any error fall back to using fullContent (optionally log or
attach the caught error context), then push the chunks entry with the safe
clientContent instead of directly calling clientOutput.
In `@packages/typescript/ai/src/activities/chat/tools/tool-calls.ts`:
- Around line 219-225: The current block that applies tool.clientOutput to the
tool result can throw on JSON.parse(result) if result is a non-JSON string and
can throw on JSON.stringify if clientOutput returns circular data; wrap the
clientOutput application in a try/catch around parsing, filtering, and
stringifying (the code handling tool.clientOutput, result, parsed, and
clientResultContent) and on any error fall back to a safe, non-throwing
representation (e.g., use the original result string or a simple
"[unserializable]" placeholder) and log the error; ensure you attempt parsing
only when result is a valid JSON string (or skip parse and pass raw value to
clientOutput), and try JSON.stringify but catch TypeError from circular
structures to avoid breaking the stream.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 97277977-43c1-4c31-97a1-92ceba614d9e
📒 Files selected for processing (7)
- packages/typescript/ai/src/activities/chat/index.ts
- packages/typescript/ai/src/activities/chat/tools/tool-calls.ts
- packages/typescript/ai/src/activities/chat/tools/tool-definition.ts
- packages/typescript/ai/src/types.ts
- packages/typescript/ai/tests/client-output-types.test.ts
- packages/typescript/ai/tests/tool-call-manager.test.ts
- packages/typescript/ai/tests/tool-definition.test.ts
```diff
  const fullContent = JSON.stringify(result.result)

  // Apply clientOutput filter if the tool defines one
  const tool = this.tools.find((t) => t.name === result.toolName)
  const clientContent =
    tool?.clientOutput && result.state !== 'output-error'
      ? JSON.stringify(tool.clientOutput(result.result))
      : fullContent

  chunks.push({
    type: 'TOOL_CALL_END',
    timestamp: Date.now(),
    model: finishEvent.model,
    toolCallId: result.toolCallId,
    toolName: result.toolName,
-   result: content,
+   result: clientContent,
```
Wrap clientOutput invocation in try-catch to prevent stream breakage.
If clientOutput throws (e.g., due to unexpected result structure or JSON.stringify failing on circular references), the exception will propagate and break the entire stream. Consider catching exceptions and falling back to fullContent.
🛡️ Proposed fix to add error handling

```diff
  const fullContent = JSON.stringify(result.result)

  // Apply clientOutput filter if the tool defines one
  const tool = this.tools.find((t) => t.name === result.toolName)
- const clientContent =
-   tool?.clientOutput && result.state !== 'output-error'
-     ? JSON.stringify(tool.clientOutput(result.result))
-     : fullContent
+ let clientContent = fullContent
+ if (tool?.clientOutput && result.state !== 'output-error') {
+   try {
+     clientContent = JSON.stringify(tool.clientOutput(result.result))
+   } catch {
+     // Fall back to full content if clientOutput fails
+   }
+ }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
const fullContent = JSON.stringify(result.result)

// Apply clientOutput filter if the tool defines one
const tool = this.tools.find((t) => t.name === result.toolName)
let clientContent = fullContent
if (tool?.clientOutput && result.state !== 'output-error') {
  try {
    clientContent = JSON.stringify(tool.clientOutput(result.result))
  } catch {
    // Fall back to full content if clientOutput fails
  }
}

chunks.push({
  type: 'TOOL_CALL_END',
  timestamp: Date.now(),
  model: finishEvent.model,
  toolCallId: result.toolCallId,
  toolName: result.toolName,
  result: clientContent,
```
```ts
// Apply clientOutput filter if the tool defines one
if (tool.clientOutput) {
  const parsed =
    typeof result === 'string' ? JSON.parse(result) : result
  clientResultContent = JSON.stringify(tool.clientOutput(parsed))
}
```
Potential runtime error when clientOutput is applied to non-JSON string results.
If tool.execute returns a plain string (not valid JSON) and clientOutput is defined, JSON.parse(result) at line 223 will throw a SyntaxError. Additionally, if tool.clientOutput returns a value with circular references, JSON.stringify at line 224 will throw.
Since clientOutput is a convenience feature for filtering sensitive data, failures here should degrade gracefully rather than break the stream.
🛡️ Proposed fix to add defensive error handling

```diff
  // Apply clientOutput filter if the tool defines one
  if (tool.clientOutput) {
-   const parsed =
-     typeof result === 'string' ? JSON.parse(result) : result
-   clientResultContent = JSON.stringify(tool.clientOutput(parsed))
+   try {
+     const parsed =
+       typeof result === 'string' ? JSON.parse(result) : result
+     clientResultContent = JSON.stringify(tool.clientOutput(parsed))
+   } catch {
+     // Fall back to unfiltered content if clientOutput fails
+   }
  }
```

📝 Committable suggestion
```ts
// Apply clientOutput filter if the tool defines one
if (tool.clientOutput) {
  try {
    const parsed =
      typeof result === 'string' ? JSON.parse(result) : result
    clientResultContent = JSON.stringify(tool.clientOutput(parsed))
  } catch {
    // Fall back to unfiltered content if clientOutput fails
  }
}
```
Summary

- Adds a `clientOutput` transform to tool definitions that filters tool results before they reach the client
- Tools without `clientOutput` preserve existing behavior exactly

Motivation
Many AI applications have tools that query databases, user records, or internal systems. These tools often return data the model needs for reasoning (e.g., a user's full profile with SSN, credit score, internal flags) but that should never leave the server and reach the client UI.
Today, tool results follow a single path: the same data goes to both the LLM and the client. The only workaround is middleware (`onChunk` or `onAfterToolCall`), which is per-chat-call configuration rather than a property of the tool itself. This means every call site that uses the tool must remember to add the right filtering logic. `clientOutput` makes the filtering declarative and co-located with the tool definition, so it can't be forgotten.

Changes
Core (`@tanstack/ai`):

- `clientOutput?: (result: any) => any` on the `Tool` interface (`types.ts`)
- `clientOutput?: (result: InferSchemaType<TOutput>) => unknown` on `ToolDefinitionConfig` and `ClientTool` (strongly typed from `outputSchema`)
- `buildToolResultChunks()` in `TextEngine` splits results: filtered for the `TOOL_CALL_END` chunk, full for the `tool`-role message
- `ToolCallManager.executeTools()` applies the same split for the standalone code path
- `clientOutput` propagates through `.server()` and `.client()` via config spread (no additional code needed)

Tests:
- Type tests (`client-output-types.test.ts`): parameter inference from `outputSchema`, `@ts-expect-error` on bad property access, propagation through `.server()`/`.client()`, `any` fallback without schema, optional by default
- `tool-definition.test.ts`: preserves `clientOutput` across definition/server/client, defaults to undefined
- `tool-call-manager.test.ts`: `ToolCallManager` filters `TOOL_CALL_END` but keeps the full LLM result, `executeToolCalls` filtering, errors bypass the filter, no filter when undefined

Test plan
- `tsc --noEmit` clean on the core package
- `tsc --noEmit` clean on the `ts-react-chat` example (after build)
- `ts-react-chat` example: browser receives filtered data, LLM receives full data and reasons over it correctly

Summary by CodeRabbit

- `clientOutput` transformation to filter tool results sent to clients, while preserving the full result for the AI model's consumption.
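The engine-side split described in the Changes section can be sketched as follows (a simplified stand-in for the logic in `buildToolResultChunks()`; the tool and result shapes here are assumptions, and the catch fallback mirrors the reviewer's hardening suggestion rather than confirmed shipped behavior):

```typescript
// Simplified sketch, not the actual TextEngine code.
interface ToolDef {
  name: string
  clientOutput?: (result: unknown) => unknown
}

interface SplitResult {
  fullContent: string // fed back to the LLM as a tool-role message
  clientContent: string // emitted to the client in the TOOL_CALL_END chunk
}

function splitToolResult(tool: ToolDef | undefined, result: unknown): SplitResult {
  const fullContent = JSON.stringify(result)
  let clientContent = fullContent
  if (tool?.clientOutput) {
    try {
      clientContent = JSON.stringify(tool.clientOutput(result))
    } catch {
      // Per the review suggestion: degrade gracefully instead of breaking the
      // stream. Note this falls back to the unfiltered content.
    }
  }
  return { fullContent, clientContent }
}
```

A tool without `clientOutput` (or one whose filter throws) yields identical client and LLM content, which matches the "omitting preserves existing behavior" guarantee in the Summary.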