
Conversation

@Tarquinen (Collaborator)

Summary

  • Adds hasReasoningInCurrentAssistantTurn to verify Claude has emitted its required thinking block before injecting context info
  • Adds isIgnoredUserMessage utility to identify UI-only messages that aren't sent to the LLM
  • Updates getLastUserMessage to skip ignored messages for accurate model detection
  • Splits GitHub Copilot and Anthropic injection guards into separate checks with different logic

Problem

Claude models with extended thinking require every assistant turn to start with a thinking block. Our context injection could fire before Claude had emitted that block, which broke the conversation.

Solution

For Anthropic models, we now walk backwards through messages to verify there's a reasoning block in the current assistant turn before injecting. This ensures we only inject after Claude has properly started its response.
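
A minimal sketch of that guard in TypeScript, assuming a simplified message shape; the `Message`/`Part` types, the `synthetic` flag, and the body of `isIgnoredUserMessage` are illustrative assumptions, not the project's actual schema:

```ts
// Hypothetical simplified types; the real message schema is richer.
type Part = { type: "text" | "reasoning" | "tool-call" };
type Message = {
  role: "user" | "assistant";
  parts: Part[];
  synthetic?: boolean; // assumed flag marking UI-only messages never sent to the LLM
};

// Stand-in criterion; the real check is project-specific.
function isIgnoredUserMessage(msg: Message): boolean {
  return msg.synthetic === true;
}

// Walk backwards through the transcript. The current assistant turn is the run
// of assistant messages since the last user message that actually went to the
// LLM, so UI-only user messages must not count as a turn boundary.
function hasReasoningInCurrentAssistantTurn(messages: Message[]): boolean {
  for (let i = messages.length - 1; i >= 0; i--) {
    const msg = messages[i];
    if (msg.role === "user" && !isIgnoredUserMessage(msg)) {
      return false; // hit the turn boundary without seeing a reasoning block
    }
    if (msg.role === "assistant" && msg.parts.some((p) => p.type === "reasoning")) {
      return true; // Claude has emitted its thinking block; safe to inject
    }
  }
  return false;
}
```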

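The model-detection fix follows the same pattern. A sketch reusing the hypothetical types above, where `getLastUserMessage` is assumed to scan backwards for the last user message that was really sent:

```ts
// Return the most recent user message that was actually sent to the LLM,
// skipping UI-only messages so model detection reads the right one.
function getLastUserMessage(messages: Message[]): Message | undefined {
  for (let i = messages.length - 1; i >= 0; i--) {
    const msg = messages[i];
    if (msg.role === "user" && !isIgnoredUserMessage(msg)) return msg;
  }
  return undefined;
}
```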
@Tarquinen merged commit a11223e into dev on Jan 14, 2026
1 check passed
@Tarquinen deleted the fix/anthropic-injection-reasoning-check branch on January 14, 2026 at 06:25