
fix(ai, ai-anthropic): thinking blocks missing on turn 2+ in tool loops #391

Open
imsherrill wants to merge 2 commits into TanStack:main from imsherrill:fix/thinking-blocks-per-step

Conversation

imsherrill (Contributor) commented Mar 20, 2026

Summary

  • Fixes thinking blocks being merged into a single ThinkingPart per message instead of one per step
  • Preserves thinking content and Anthropic signatures in server-side message history for multi-turn context
  • Adds interleaved-thinking-2025-05-14 beta header when thinking is enabled — required by Anthropic API for thinking on tool-result follow-up turns
  • Each thinking step now tracked by stepId through the full stack: adapter → engine → processor → UIMessage
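
The type extensions described above can be sketched as follows. These shapes are inferred from the PR summary — the real definitions live in @tanstack/ai's types.ts, so anything beyond the stepId, signature, and thinking fields named above is illustrative:

```typescript
// Hypothetical shapes inferred from the PR description; field names
// stepId, signature, and thinking follow the summary above.
interface ThinkingPart {
  type: 'thinking'
  content: string
  stepId?: string // which reasoning step produced this block
  signature?: string // Anthropic's signature, needed to replay the block
}

interface ModelMessage {
  role: 'user' | 'assistant' | 'tool'
  content: string
  thinking?: Array<{ content: string; signature?: string }>
}

// One ThinkingPart per step instead of a single merged part:
const parts: Array<ThinkingPart> = [
  { type: 'thinking', content: 'plan the tool call', stepId: 'step-1', signature: 'sig-1' },
  { type: 'thinking', content: 'interpret the result', stepId: 'step-2', signature: 'sig-2' },
]

// Thinking is preserved onto the assistant ModelMessage for multi-turn context:
const assistant: ModelMessage = {
  role: 'assistant',
  content: 'Here is the answer.',
  thinking: parts.map((p) => ({ content: p.content, signature: p.signature })),
}
```

With this shape, each step's signature survives into server-side history, which is what the Anthropic API requires to replay thinking on follow-up turns.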

Changes

@tanstack/ai:

  • ThinkingPart gains optional stepId and signature fields
  • ModelMessage gains optional thinking array for multi-turn context
  • StepFinishedEvent gains optional signature field
  • StreamProcessor handles STEP_STARTED, tracks thinking per-step via Map<stepId, content>
  • TextEngine accumulates thinking + signatures per iteration, includes them in assistant messages
  • buildAssistantMessages preserves ThinkingParts into ModelMessage.thinking
  • updateThinkingPart keys on stepId instead of replacing a single part
  • onThinkingUpdate callback gains stepId parameter
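
A minimal sketch of the stepId-keyed update the last two bullets describe, assuming matching by step ID — not the library's actual implementation:

```typescript
// Sketch: match an existing thinking part by stepId, or create a new
// one per step, instead of replacing a single merged part.
type ThinkingPart = {
  type: 'thinking'
  stepId: string
  content: string
  signature?: string
}

function updateThinkingPart(
  parts: Array<ThinkingPart>,
  stepId: string,
  delta: string,
  signature?: string,
): void {
  const existing = parts.find((p) => p.stepId === stepId)
  if (existing) {
    existing.content += delta
    if (signature) existing.signature = signature
  } else {
    const part: ThinkingPart = { type: 'thinking', stepId, content: delta }
    if (signature) part.signature = signature
    parts.push(part)
  }
}

const parts: Array<ThinkingPart> = []
updateThinkingPart(parts, 'step-1', 'first thought')
updateThinkingPart(parts, 'step-2', 'second thought', 'sig-2')
updateThinkingPart(parts, 'step-1', ', continued')
// → two parts: step-1 holds "first thought, continued", step-2 holds "second thought"
```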

@tanstack/ai-anthropic:

  • Captures signature_delta stream events for thinking block signatures
  • Emits final STEP_FINISHED with signature on content_block_stop
  • Includes thinking blocks with signatures in formatMessages for multi-turn history
  • Passes betas: ['interleaved-thinking-2025-05-14'] when thinking is enabled
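
Sketched below is the adapter-side behavior those bullets describe. The signature_delta and content_block_stop event names come from Anthropic's streaming API; the requestParams shape and the onStreamEvent helper are illustrative assumptions, not the adapter's real code:

```typescript
// Hedged sketch of the Anthropic adapter behavior described above.
const thinkingBudget = 1024

const requestParams: Record<string, unknown> = {
  model: 'claude-sonnet-4-5',
  max_tokens: 4096,
  ...(thinkingBudget > 0 && {
    thinking: { type: 'enabled', budget_tokens: thinkingBudget },
    // Required by the Anthropic API for thinking on tool-result follow-up turns:
    betas: ['interleaved-thinking-2025-05-14'],
  }),
}

// During streaming, accumulate the signature until the block closes:
let accumulatedSignature = ''
function onStreamEvent(event: {
  type: string
  delta?: { type: string; signature?: string }
}): void {
  if (event.type === 'content_block_delta' && event.delta?.type === 'signature_delta') {
    accumulatedSignature += event.delta.signature ?? ''
  } else if (event.type === 'content_block_stop') {
    // Here the adapter would emit STEP_FINISHED carrying the completed signature.
    console.log('STEP_FINISHED', { signature: accumulatedSignature })
  }
}

onStreamEvent({ type: 'content_block_delta', delta: { type: 'signature_delta', signature: 'abc' } })
onStreamEvent({ type: 'content_block_delta', delta: { type: 'signature_delta', signature: 'def' } })
// accumulatedSignature is now 'abcdef'
```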

@tanstack/ai-client:

  • onThinkingUpdate callback updated for new stepId parameter

Test plan

  • 634 tests pass in @tanstack/ai (4 new tests for multi-step thinking)
  • 198 tests pass in @tanstack/ai-client
  • Types and eslint clean
  • Manual test: Anthropic Sonnet 4.5 — thinking on all 4 turns of multi-step tool flow
  • Manual test: OpenAI o4-mini — reasoning on multiple turns
  • Manual test: client tools with approval — no infinite loop, completes normally

Closes #340

Summary by CodeRabbit

  • New Features
    • Added support for Anthropic's interleaved thinking API, enabling AI models to perform step-based reasoning when thinking budget is configured
    • Enhanced thinking block handling to support multiple reasoning steps per message with provider signatures
    • Improved streaming processor to capture and track individual thinking steps with their corresponding signatures

- Track thinking per-step via stepId instead of merging into single ThinkingPart
- Capture Anthropic signature_delta and preserve through the full stack
- Server-side TextEngine accumulates thinking + signatures per iteration
- Include thinking blocks in Anthropic message history for multi-turn context
- Add interleaved-thinking-2025-05-14 beta header when thinking is enabled
- Add tests for multi-step thinking, backward compat, and result aggregation

Closes TanStack#340
coderabbitai bot (Contributor) commented Mar 20, 2026

📝 Walkthrough


This PR implements multi-step thinking support for Anthropic's interleaved-thinking API. Changes add step-aware thinking accumulation across tool loops, capture provider signatures from streaming responses, integrate thinking blocks into assistant messages, and refactor stream state to track thinking per step ID rather than a single aggregate field.

Changes

• Anthropic Adapter — packages/typescript/ai-anthropic/src/adapters/text.ts
  Adds betas: ['interleaved-thinking-2025-05-14'] when thinkingBudget is enabled; formats thinking blocks with signatures into Anthropic content blocks; captures and replays thinking signatures during streaming with accumulatedSignature state and signature_delta event handling; emits STEP_FINISHED with signature when thinking blocks complete.
• Stream Processor Core — packages/typescript/ai/src/activities/chat/stream/processor.ts, packages/typescript/ai/src/activities/chat/stream/types.ts
  Refactors thinking accumulation from a single thinkingContent field to multi-step tracking: replaces it with a thinkingSteps Map, a thinkingStepSignatures Map, a thinkingStepOrder Array, and currentThinkingStepId. Adds a handleStepStartedEvent() handler; updates the onThinkingUpdate callback signature to include a stepId parameter; reworks aggregation logic to concatenate thinking across ordered steps.
• Message Stream Updaters — packages/typescript/ai/src/activities/chat/stream/message-updaters.ts
  Updates updateThinkingPart() to require a stepId parameter and match/create thinking parts by step ID; adds stepId and optional signature properties to created/updated ThinkingPart objects.
• Chat Engine & Messages — packages/typescript/ai/src/activities/chat/index.ts, packages/typescript/ai/src/activities/chat/messages.ts
  Extends TextEngine to capture and persist thinking from STEP_STARTED/STEP_FINISHED chunks with accumulatedThinking state; updates message construction to include a thinking array with content and optional signatures; refactors buildAssistantMessages to collect UI thinking parts into pendingThinking and attach them to assistant messages.
• Type Definitions — packages/typescript/ai/src/types.ts
  Extends ModelMessage with an optional thinking array containing { content, signature? }; augments ThinkingPart with optional stepId and signature; adds optional signature to StepFinishedEvent.
• Chat Client — packages/typescript/ai-client/src/chat-client.ts
  Updates the StreamProcessor.onThinkingUpdate event handler to accept stepId as a second parameter; adjusts handler invocation to match the new callback arity.
• Tests — packages/typescript/ai/tests/message-updaters.test.ts, packages/typescript/ai/tests/stream-processor.test.ts
  Adds a stepId argument to updateThinkingPart() call sites and verifies step-aware thinking part creation/updates; introduces a STEP_STARTED test helper; verifies separate thinking parts for different stepId values; updates onThinkingUpdate callback assertions to expect the (msgId, stepId, content) signature; tests backward compatibility and multi-step concatenation.

Sequence Diagram

sequenceDiagram
    participant Client
    participant AnthropicAPI as Anthropic API
    participant StreamProcessor
    participant TextEngine
    participant Messages as Message Builder

    Client->>AnthropicAPI: Request with betas + thinkingBudget
    AnthropicAPI-->>StreamProcessor: STEP_STARTED (stepId=step-1)
    StreamProcessor->>StreamProcessor: Record pendingThinkingStepId
    
    AnthropicAPI-->>StreamProcessor: STEP_FINISHED (stepId=step-1, thinking content)
    StreamProcessor->>StreamProcessor: Accumulate thinkingSteps[step-1]
    StreamProcessor->>StreamProcessor: Update currentThinkingStepId
    StreamProcessor->>Messages: updateThinkingPart(msgId, step-1, content)
    
    AnthropicAPI-->>StreamProcessor: signature_delta events
    StreamProcessor->>StreamProcessor: Accumulate signature
    
    AnthropicAPI-->>StreamProcessor: content_block_stop (thinking)
    StreamProcessor->>StreamProcessor: Emit STEP_FINISHED with signature
    
    AnthropicAPI-->>StreamProcessor: Text/Tool-use content
    StreamProcessor->>TextEngine: Process stream chunks
    TextEngine->>TextEngine: Persist thinking from STEP_FINISHED
    
    TextEngine->>Messages: Build assistant message with thinking array
    Messages-->>Client: Return message with thinking blocks + signature

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Poem

🐰 The rabbit thought—then thought once more,
Through tool loops spinning, stepping o'er,
With signatures traced on each stepping stone,
The thinking comes back—no turn stands alone!

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage — ⚠️ Warning: docstring coverage is 75.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)
  • Title check — the title clearly identifies the primary fix (thinking blocks missing on turn 2+ in tool loops) and accurately summarizes the main objective without being vague or misleading.
  • Description check — the description provides comprehensive detail, including a summary of changes, per-package modifications, a test plan with specific numbers, and a link to the related issue (#340). All required template sections are adequately addressed.
  • Linked Issues check — the PR fully addresses issue #340 by implementing per-step thinking tracking, preserving thinking across multi-turn contexts, adding the required Anthropic beta header, and including comprehensive tests for multi-turn tool loops with interleaved thinking.
  • Out of Scope Changes check — all code changes are directly scoped to the linked issue: thinking blocks are now tracked per step, persisted in message history, and properly signaled to the Anthropic API. No extraneous modifications detected.




nx-cloud bot commented Mar 20, 2026

View your CI Pipeline Execution ↗ for commit c042c55

  • nx affected --targets=test:sherif,test:knip,tes... — ✅ Succeeded in 3m 59s (View ↗)
  • nx run-many --targets=build --exclude=examples/** — ✅ Succeeded in 1m 28s (View ↗)

☁️ Nx Cloud last updated this comment at 2026-03-20 21:44:56 UTC


pkg-pr-new bot commented Mar 20, 2026


@tanstack/ai

npm i https://pkg.pr.new/@tanstack/ai@391

@tanstack/ai-anthropic

npm i https://pkg.pr.new/@tanstack/ai-anthropic@391

@tanstack/ai-client

npm i https://pkg.pr.new/@tanstack/ai-client@391

@tanstack/ai-devtools-core

npm i https://pkg.pr.new/@tanstack/ai-devtools-core@391

@tanstack/ai-elevenlabs

npm i https://pkg.pr.new/@tanstack/ai-elevenlabs@391

@tanstack/ai-event-client

npm i https://pkg.pr.new/@tanstack/ai-event-client@391

@tanstack/ai-fal

npm i https://pkg.pr.new/@tanstack/ai-fal@391

@tanstack/ai-gemini

npm i https://pkg.pr.new/@tanstack/ai-gemini@391

@tanstack/ai-grok

npm i https://pkg.pr.new/@tanstack/ai-grok@391

@tanstack/ai-groq

npm i https://pkg.pr.new/@tanstack/ai-groq@391

@tanstack/ai-ollama

npm i https://pkg.pr.new/@tanstack/ai-ollama@391

@tanstack/ai-openai

npm i https://pkg.pr.new/@tanstack/ai-openai@391

@tanstack/ai-openrouter

npm i https://pkg.pr.new/@tanstack/ai-openrouter@391

@tanstack/ai-preact

npm i https://pkg.pr.new/@tanstack/ai-preact@391

@tanstack/ai-react

npm i https://pkg.pr.new/@tanstack/ai-react@391

@tanstack/ai-react-ui

npm i https://pkg.pr.new/@tanstack/ai-react-ui@391

@tanstack/ai-solid

npm i https://pkg.pr.new/@tanstack/ai-solid@391

@tanstack/ai-solid-ui

npm i https://pkg.pr.new/@tanstack/ai-solid-ui@391

@tanstack/ai-svelte

npm i https://pkg.pr.new/@tanstack/ai-svelte@391

@tanstack/ai-vue

npm i https://pkg.pr.new/@tanstack/ai-vue@391

@tanstack/ai-vue-ui

npm i https://pkg.pr.new/@tanstack/ai-vue-ui@391

@tanstack/preact-ai-devtools

npm i https://pkg.pr.new/@tanstack/preact-ai-devtools@391

@tanstack/react-ai-devtools

npm i https://pkg.pr.new/@tanstack/react-ai-devtools@391

@tanstack/solid-ai-devtools

npm i https://pkg.pr.new/@tanstack/solid-ai-devtools@391

commit: c042c55

coderabbitai bot left a review comment
Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/typescript/ai/src/activities/chat/messages.ts (1)

181-186: ⚠️ Potential issue | 🟠 Major

Round-tripping through ModelMessage still strips thinking.

Line 185 starts serializing thinking into assistant ModelMessages, but modelMessageToUIMessage() below never materializes modelMessage.thinking back into ThinkingParts. Any history hydrated through normalizeToUIMessage() / ChatClient.append() will silently drop the thinking blocks and signatures, so a later convertMessagesToModelMessages() sends stripped context again.

Patch sketch
 export function modelMessageToUIMessage(
   modelMessage: ModelMessage,
   id?: string,
 ): UIMessage {
   const parts: Array<MessagePart> = []
+
+  if (modelMessage.role === 'assistant' && modelMessage.thinking?.length) {
+    for (const thinkingPart of modelMessage.thinking) {
+      parts.push({
+        type: 'thinking',
+        content: thinkingPart.content,
+        ...(thinkingPart.signature && {
+          signature: thinkingPart.signature,
+        }),
+      })
+    }
+  }
 
   // Handle tool results (when role is "tool") - only produce tool-result part,
   // not a text part (the content IS the tool result, not display text)
   if (modelMessage.role === 'tool' && modelMessage.toolCallId) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai/src/activities/chat/messages.ts` around lines 181 -
186, The assistant message builder adds a thinking field to ModelMessage (see
the push that adds thinking), but
modelMessageToUIMessage()/normalizeToUIMessage()/ChatClient.append() never
rehydrate modelMessage.thinking back into ThinkingPart objects and signatures,
causing round-trip loss; update modelMessageToUIMessage (and any
normalizeToUIMessage/ChatClient.append code paths) to detect
modelMessage.thinking and reconstruct the original ThinkingPart shape (including
reasoning text and signature fields) so that convertMessagesToModelMessages
receives intact thinking blocks on subsequent conversions.
🧹 Nitpick comments (3)
packages/typescript/ai/tests/stream-processor.test.ts (1)

778-830: Add one signature propagation regression here.

These new cases cover stepId, but none assert that a STEP_FINISHED.signature survives into the stored thinking part. That field is what Anthropic needs on the follow-up turn, so this branch is still unguarded by tests.

Test sketch
+    it('should persist signature on a thinking step', () => {
+      const processor = new StreamProcessor()
+      processor.prepareAssistantMessage()
+
+      processor.processChunk(
+        chunk('STEP_FINISHED', {
+          stepId: 'step-1',
+          delta: 'thinking...',
+          signature: 'sig-1',
+        }),
+      )
+
+      expect(processor.getMessages()[0]?.parts[0]).toEqual({
+        type: 'thinking',
+        content: 'thinking...',
+        stepId: 'step-1',
+        signature: 'sig-1',
+      })
+    })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai/tests/stream-processor.test.ts` around lines 778 -
830, Add assertions to the tests that verify STEP_FINISHED.signature is
propagated into the stored thinking parts and into the aggregated
state.thinking: when using StreamProcessor.processChunk with
ev.stepFinished(..., stepId) include a signature value and assert that (from
processor.getMessages()[0]!.parts filtered by type 'thinking') each thinking
part has the same signature (e.g., for 'step-1' and 'step-2'), and in the
getResult()/getState() concatenation case assert that state.thinking retains
each step's signature information (or that the per-step signatures are
accessible on the stored thinking parts); locate code via
StreamProcessor.processChunk, ev.stepFinished, processor.getMessages, and
processor.getState to add these checks.
packages/typescript/ai-anthropic/src/adapters/text.ts (2)

395-405: Thinking blocks without signatures are silently filtered out.

The loop only pushes thinking blocks that have a signature. This is correct per Anthropic's API requirements (signatures are mandatory for replaying thinking in multi-turn context), but a brief comment explaining this would help maintainability.

📝 Suggested comment
 if (message.thinking?.length) {
   for (const thinking of message.thinking) {
+    // Anthropic requires signatures to replay thinking blocks in multi-turn context;
+    // blocks without signatures cannot be included.
     if (thinking.signature) {
       contentBlocks.push({
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-anthropic/src/adapters/text.ts` around lines 395 -
405, Add a short explanatory comment above the loop that filters thinking blocks
to clarify that only thinking entries with a signature are included because
Anthropic requires signatures to replay thinking in multi-turn context;
reference the variables message.thinking, thinking.signature, the
contentBlocks.push of AnthropicContentBlock, and note that omitting unsigned
thinking blocks is intentional for API compliance and not a bug.

292-295: Add a comment explaining the beta header requirement and type assertion.

The beta header interleaved-thinking-2025-05-14 is required to enable interleaved thinking in Claude models during tool-use conversations. Add a brief comment explaining why this beta is needed and why the as any cast is used (the Anthropic SDK types don't include betas in this parameter definition).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-anthropic/src/adapters/text.ts` around lines 292 -
295, Add an inline comment above the conditional that sets betas explaining that
the beta header "interleaved-thinking-2025-05-14" is required to enable
interleaved thinking for Claude during tool-use conversations, and note that the
`as any` cast is used because the Anthropic SDK types do not include `betas` on
this parameter; update the block around the `thinkingBudget` conditional and the
`betas` property to include that short explanatory comment next to `betas` and
`as any` so future readers understand both the runtime requirement and the
type-workaround.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 19b0cdc6-d9a2-4d83-bb64-7a1aef5b7b29

📥 Commits

Reviewing files that changed from the base of the PR and between c3583e3 and c042c55.

📒 Files selected for processing (10)
  • packages/typescript/ai-anthropic/src/adapters/text.ts
  • packages/typescript/ai-client/src/chat-client.ts
  • packages/typescript/ai/src/activities/chat/index.ts
  • packages/typescript/ai/src/activities/chat/messages.ts
  • packages/typescript/ai/src/activities/chat/stream/message-updaters.ts
  • packages/typescript/ai/src/activities/chat/stream/processor.ts
  • packages/typescript/ai/src/activities/chat/stream/types.ts
  • packages/typescript/ai/src/types.ts
  • packages/typescript/ai/tests/message-updaters.test.ts
  • packages/typescript/ai/tests/stream-processor.test.ts

Comment on lines +168 to 169
let pendingThinking: Array<{ content: string; signature?: string }> = []


⚠️ Potential issue | 🟠 Major

This still loses thinking/tool-call order within a single turn.

pendingThinking is buffered independently from current.toolCalls, so a sequence like [thinking(step-1), tool-call(a), thinking(step-2), tool-call(b)] gets flattened into one assistant ModelMessage with thinking: [step-1, step-2] and toolCalls: [a, b]. If a provider emits multiple thinking blocks around multiple tool calls in one turn, the original interleaving is gone and the next request cannot faithfully replay that history.

Also applies to: 233-239

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai/src/activities/chat/messages.ts` around lines 168 -
169, The issue is that pendingThinking is buffered separately from
current.toolCalls, which loses the original interleaving (e.g., thinking,
tool-call, thinking) when constructing ModelMessage; change the buffering to a
single ordered stream of typed entries (e.g., an array of {type:
'thinking'|'toolCall', payload: ...}) instead of separate pendingThinking and
current.toolCalls so you can preserve insert order when emitting/serializing;
update all places that push to pendingThinking and to current.toolCalls (and the
duplicate buffering logic referenced around the 233-239 region) to push a typed
entry into the unified buffer and update the code that builds the assistant
ModelMessage to iterate this unified buffer and emit thinking/toolCall entries
in original sequence.
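
The unified buffer this suggestion describes could look like the following hedged sketch; the entry shapes and field names are illustrative, not the package's actual types:

```typescript
// Sketch: one ordered buffer of typed entries instead of separate
// pendingThinking / toolCalls arrays, so interleaving survives.
type TurnEntry =
  | { type: 'thinking'; content: string; signature?: string }
  | { type: 'toolCall'; id: string; name: string }

const buffer: Array<TurnEntry> = []
buffer.push({ type: 'thinking', content: 'step-1', signature: 'sig-1' })
buffer.push({ type: 'toolCall', id: 'a', name: 'search' })
buffer.push({ type: 'thinking', content: 'step-2', signature: 'sig-2' })
buffer.push({ type: 'toolCall', id: 'b', name: 'fetch' })

// Emitting from the single buffer preserves the original sequence:
const order = buffer.map((e) => e.type)
// order → ['thinking', 'toolCall', 'thinking', 'toolCall']
```

Separate arrays would flatten this into [thinking, thinking] + [toolCall, toolCall], which is exactly the replay-fidelity loss the comment points out.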

private activeMessageIds: Set<string> = new Set()
private toolCallToMessage: Map<string, string> = new Map()
private pendingManualMessageId: string | null = null
private pendingThinkingStepId: string | null = null

⚠️ Potential issue | 🟡 Minor

pendingThinkingStepId should be reset in resetStreamState().

If a STEP_STARTED event arrives at the end of one stream but no corresponding STEP_FINISHED follows (e.g., the stream was aborted), pendingThinkingStepId will retain a stale value. When a new stream starts via prepareAssistantMessage() → resetStreamState(), this stale stepId could incorrectly associate with new thinking content.

🐛 Proposed fix
 private resetStreamState(): void {
   this.messageStates.clear()
   this.activeMessageIds.clear()
   this.activeRuns.clear()
   this.toolCallToMessage.clear()
   this.pendingManualMessageId = null
+  this.pendingThinkingStepId = null
   this.finishReason = null
   this.hasError = false
   this.isDone = false
   this.chunkStrategy.reset?.()
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai/src/activities/chat/stream/processor.ts` at line 146,
The field pendingThinkingStepId is not cleared in resetStreamState(), causing
stale step IDs to persist across streams; update the resetStreamState() method
to set this.pendingThinkingStepId = null so that prepareAssistantMessage()
starting a new stream cannot inherit an old id, and ensure any other
stream-reset paths also call resetStreamState() (or explicitly clear
pendingThinkingStepId) to avoid leaking the previous STEP_STARTED association.


Development

Successfully merging this pull request may close these issues.

Anthropic tool loops: thinking blocks missing on turn 2+
