fix(responses): preserve cached tool call objects in tool result recovery #20700
Conversation
Greptile Summary

This PR fixes tool-call recovery in the Responses→ChatCompletions transformation path by normalizing cached tool-call entries before synthesizing missing tool_calls.

Confidence Score: 3/5
| Filename | Overview |
|---|---|
| litellm/responses/litellm_completion_transformation/transformation.py | Adds normalization for cached tool_call entries and extends tool-call chunk creation to accept mapping-like objects; current implementation only supports .get-style objects, so attribute-based Pydantic models may still drop function.name/arguments during recovery. |
| tests/test_litellm/responses/litellm_completion_transformation/test_litellm_completion_responses.py | Adds regression test that caches a ChatCompletionMessageToolCall and asserts recovered tool call preserves function.name/arguments; test may be brittle if those types don’t implement .get in practice. |
Sequence Diagram

```mermaid
sequenceDiagram
    participant Caller as Caller
    participant Ensure as _ensure_tool_results_have_corresponding_tool_calls
    participant Cache as TOOL_CALLS_CACHE
    participant Norm as _normalize_tool_use_definition
    participant Chunk as _create_tool_call_chunk
    participant Asst as Previous assistant msg
    Caller->>Ensure: messages, tools?
    loop each message[i]
        Ensure->>Ensure: if role != "tool" skip
        Ensure->>Ensure: find prev_assistant_idx
        Ensure->>Ensure: recover tool_call_id if missing
        alt tool_call_id missing and removable
            Ensure->>Ensure: mark tool message for removal
        else tool_call_id present
            Ensure->>Asst: _get_tool_calls_list(prev_assistant)
            Ensure->>Ensure: _check_tool_call_exists?
            alt missing tool_call
                Ensure->>Cache: get_cache(tool_call_id)
                alt cache miss and tools provided
                    Ensure->>Ensure: _reconstruct_tool_call_from_tools
                end
                Ensure->>Norm: normalize(cached_def, tool_call_id)
                alt normalized_def present
                    Ensure->>Chunk: create chunk(normalized_def)
                    Ensure->>Asst: append tool_call_chunk
                else normalization failed
                    Ensure-->>Ensure: cannot recover tool call
                end
            end
        end
    end
    Ensure-->>Caller: fixed_messages (with tool_calls added/removals)
```
2 files reviewed, 2 comments
```python
function: Dict[str, Any] = function_raw
elif hasattr(function_raw, "get"):
    function = {
```
Object .get assumption
_create_tool_call_chunk treats any function_raw with a .get attribute as mapping-like and calls function_raw.get(...). If Function is a Pydantic model (common in this repo), it typically does not implement .get, so this branch won’t run and you’ll fall back to {}, losing name/arguments again. Consider handling attribute-based objects too (e.g. hasattr(function_raw, "name")/"arguments") or normalizing function inside _normalize_tool_use_definition to a plain dict.
| """ | ||
| Normalize cached tool_call definitions to a dict-like shape consumed by _create_tool_call_chunk. | ||
| """ |
Normalization misses attr objects
_normalize_tool_use_definition only supports dict or objects exposing .get. If the cache stores a typed ChatCompletionMessageToolCall/Function as an attribute-based model (no .get), this will return None and tool call recovery won’t happen. To match the PR intent (“object-like”), add an attribute-based path (e.g. getattr(tool_use_definition, "id", None), etc.).
Pull request overview
This PR fixes a Responses API multi-turn tool-calling recovery bug where cached tool call objects (e.g., ChatCompletionMessageToolCall) were previously treated as invalid because they weren’t dicts, causing recovered tool calls to lose function.name / function.arguments and become malformed.
Changes:
- Update tool-call recovery to normalize cached tool-call objects into a dict-like shape before chunk creation.
- Extend `_create_tool_call_chunk(...)` to support object-like `function` payloads via `.get(...)` accessors.
- Add a regression test to ensure cached `ChatCompletionMessageToolCall` objects preserve function metadata during recovery.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| litellm/responses/litellm_completion_transformation/transformation.py | Normalizes cached tool-call entries and preserves tool call metadata when reconstructing missing assistant tool_calls. |
| tests/test_litellm/responses/litellm_completion_transformation/test_litellm_completion_responses.py | Adds regression test covering cached OpenAI-object tool calls in TOOL_CALLS_CACHE. |
@greptile
Greptile Summary

This PR fixes a critical bug where cached tool call objects (both dict-like and attribute-based) lost function metadata during tool-call recovery from TOOL_CALLS_CACHE.

Confidence Score: 4/5
| Filename | Overview |
|---|---|
| litellm/responses/litellm_completion_transformation/transformation.py | Adds robust handling for cached tool call objects (both dict-like and attribute-based) through new helper methods _get_mapping_or_attr_value and _normalize_tool_use_definition, fixing data loss when recovering tool calls from cache |
| tests/test_litellm/responses/litellm_completion_transformation/test_litellm_completion_responses.py | Adds comprehensive test coverage for both ChatCompletionMessageToolCall (Pydantic) objects and plain attribute-based objects, validating that cached tool call metadata is properly preserved during recovery |
Sequence Diagram

```mermaid
sequenceDiagram
    participant Cache as TOOL_CALLS_CACHE
    participant Recovery as Recovery Function
    participant Normalize as Normalize Function
    participant Helper as Helper Function
    participant CreateChunk as Create Chunk Function
    participant Message as Assistant Message
    Note over Recovery: Detect tool result without tool call
    Recovery->>Cache: get cached tool call
    Cache-->>Recovery: ChatCompletionMessageToolCall
    Recovery->>Normalize: normalize cached object
    alt Object is dict
        Normalize->>Normalize: use dict directly
    else Object is attribute-based
        Normalize->>Helper: get id attribute
        Helper-->>Normalize: return id value
        Normalize->>Helper: get type attribute
        Helper-->>Normalize: return type value
        Normalize->>Helper: get function name
        Helper-->>Normalize: return name value
        Normalize->>Helper: get function arguments
        Helper-->>Normalize: return arguments value
        Normalize->>Normalize: build normalized dict
    end
    Normalize-->>Recovery: normalized tool definition
    Recovery->>CreateChunk: create tool call chunk
    CreateChunk->>Helper: extract fields safely
    Helper-->>CreateChunk: field values
    CreateChunk-->>Recovery: ChatCompletionToolCallChunk
    Recovery->>Message: add recovered tool call
    Message-->>Recovery: updated message
```
2 files reviewed, 1 comment
…tion.py Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
@greptile
Greptile Summary

This PR fixes tool-call recovery for the Responses API transformation when tool results arrive without a matching assistant tool_call. The core flow lives in `_ensure_tool_results_have_corresponding_tool_calls`.

Confidence Score: 4/5
| Filename | Overview |
|---|---|
| litellm/responses/litellm_completion_transformation/transformation.py | Adds normalization/helpers to preserve cached tool call objects when reconstructing tool_calls; however _ensure_tool_results_have_corresponding_tool_calls still assumes dict messages via .get() in role checks, which will crash for object-style messages allowed by its type signature. |
| tests/test_litellm/responses/litellm_completion_transformation/test_litellm_completion_responses.py | Adds regression tests covering cached tool calls stored as OpenAI tool-call objects and as attribute-only objects, ensuring tool call recovery preserves function name/arguments. |
Sequence Diagram

```mermaid
sequenceDiagram
    participant Caller
    participant Transform as LiteLLMCompletionResponsesConfig
    participant Cache as TOOL_CALLS_CACHE
    participant Assistant as prev assistant msg
    Caller->>Transform: _ensure_tool_results_have_corresponding_tool_calls(messages, tools)
    loop for each tool message
        Transform->>Transform: _find_previous_assistant_idx()
        Transform->>Transform: _get_tool_calls_list(prev_assistant)
        alt missing tool_call_id in prev assistant
            Transform->>Cache: get_cache(tool_call_id)
            alt cache miss and tools provided
                Transform->>Transform: _reconstruct_tool_call_from_tools(tool_call_id, tools)
            end
            Transform->>Transform: _normalize_tool_use_definition(cached_or_reconstructed, tool_call_id)
            Transform->>Transform: _create_tool_call_chunk(normalized, tool_call_id, index)
            Transform->>Assistant: _add_tool_call_to_assistant(prev_assistant, chunk)
        end
    end
    Transform-->>Caller: fixed_messages
```
2 files reviewed, 1 comment
Additional Comments (1)
Also appears at transformation.py:774-775.
11531da into BerriAI:litellm_oss_staging_02_09_2026
…very (#20700)

* fix(responses): preserve cached tool call objects in tool result recovery
* fix(responses): support attr-based cached tool call recovery
* Update litellm/responses/litellm_completion_transformation/transformation.py

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

---------

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Relevant issues
Fixes #20699
Pre-Submission checklist
Please complete all items before asking a LiteLLM maintainer to review your PR
- Add at least 1 test in the `tests/litellm/` directory (a hard requirement - see details)
- `make test-unit`

CI (LiteLLM team)
Branch creation CI run
Link:
CI run for the last commit
Link:
Merge / cherry-pick CI run
Links:
Type
🐛 Bug Fix
✅ Test
Changes
- Updated `_ensure_tool_results_have_corresponding_tool_calls` to correctly handle cached tool-call values that are object-like (for example `ChatCompletionMessageToolCall`) instead of blindly requiring a `dict`.
- Added `_normalize_tool_use_definition(...)` to normalize cached tool-call payloads before creating tool call chunks.
- Updated `_create_tool_call_chunk(...)` to support object-like `function` payloads (e.g. `Function`) and preserve `name`/`arguments`.
- Extended `tests/test_litellm/responses/litellm_completion_transformation/test_litellm_completion_responses.py` to cover cache values stored as `ChatCompletionMessageToolCall` objects.

Root cause
`transform_chat_completion_tools_to_responses_tools` writes `ChatCompletionMessageToolCall` objects into `TOOL_CALLS_CACHE`, but `_ensure_tool_results_have_corresponding_tool_calls` previously treated any non-`dict` cache hit as invalid and replaced it with `{}`. That dropped tool metadata and injected malformed tool calls (`name=''`, `arguments='{}'`).

Validation
Executed locally:

```shell
poetry run pytest -q tests/test_litellm/responses/litellm_completion_transformation/test_litellm_completion_responses.py
poetry run pytest -q tests/test_litellm/responses/litellm_completion_transformation/test_litellm_completion_responses.py::TestFunctionCallTransformation::test_ensure_tool_results_preserves_cached_openai_object_tool_call tests/llm_responses_api_testing/test_anthropic_tool_result_fix.py::test_fix_ensures_tool_calls_for_tool_results
```

Also re-ran the issue's reproduction flow and verified the recovered tool call now preserves:

- `function.name == "search_web"`
- `function.arguments == '{"query": "python bugs"}'`
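The root cause can be reproduced in miniature with stand-in classes (`FakeToolCall`/`FakeFunction` are hypothetical, not litellm's types): requiring a `dict` and falling back to `{}` silently discards an attribute-based object's fields, while normalizing first preserves them.

```python
from dataclasses import dataclass


@dataclass
class FakeFunction:
    """Stands in for the OpenAI Function model (attribute-based, no .get)."""
    name: str
    arguments: str


@dataclass
class FakeToolCall:
    """Stands in for ChatCompletionMessageToolCall."""
    id: str
    function: FakeFunction


cached = FakeToolCall(id="call_1", function=FakeFunction("search_web", '{"query": "python bugs"}'))

# Buggy pattern: any non-dict cache hit was replaced with {}, dropping metadata.
buggy = cached if isinstance(cached, dict) else {}

# Fixed pattern: normalize attribute-based objects into the expected dict shape.
fixed = cached if isinstance(cached, dict) else {
    "id": cached.id,
    "type": "function",
    "function": {"name": cached.function.name, "arguments": cached.function.arguments},
}
```

Under the buggy pattern `buggy` is `{}`, which is exactly the malformed `name=''` / `arguments='{}'` tool call described above once it reaches chunk creation.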