litellm._content_to_message_param does not handle thought parts correctly #4069

@kiancchen

Description


Describe the bug
Thought parts produced by the model through LiteLLM are not treated as reasoning_content; in subsequent requests the thought text is sent back to the model as normal assistant message content, which causes the model to stop its thinking process.
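
For illustration (a hand-written shape, not copied from the debug log), the converted assistant message currently looks like this when the model emitted a thought part:

```python
# Hypothetical current output of _content_to_message_param for a
# thought-only assistant turn: the thought text lands in `content`,
# so the next request replays it as ordinary assistant output.
{
    "role": "assistant",
    "content": "Let me think step by step...",  # thought text, misplaced
    "tool_calls": None,
}
```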

To Reproduce

  1. Build an agent with LiteLLM and turn on LiteLLM debug mode.
  2. Run the agent and ask it some questions (a minimal repro sketch follows).
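
A minimal repro sketch (the agent name, instruction, and provider prefix on the model string are placeholders, not taken from my actual setup):

```python
# Repro sketch: an ADK agent backed by LiteLLM, with LiteLLM debug
# logging turned on so the outgoing message payloads are visible.
import litellm
from google.adk.agents import Agent
from google.adk.models.lite_llm import LiteLlm

litellm._turn_on_debug()  # prints raw requests/responses

agent = Agent(
    name="repro_agent",  # placeholder
    model=LiteLlm(model="openai/doubao-seed-1-8-251215"),  # assumed provider prefix
    instruction="Answer with step-by-step reasoning.",  # placeholder
)
```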

Expected behavior
The thought should be carried in reasoning_content in the subsequent LLM requests.
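
Illustratively (same hypothetical turn as above), the converted message should instead look like:

```python
# Expected shape: the thought travels in reasoning_content and is not
# repeated in content, so the provider can continue the thinking process.
{
    "role": "assistant",
    "content": None,
    "tool_calls": None,
    "reasoning_content": "Let me think step by step...",
}
```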

Desktop (please complete the following information):

  • OS: macOS
  • Python version (python -V): Python 3.11.14
  • ADK version (pip show google-adk): 1.21.0

Model Information:

  • Are you using LiteLLM: Yes
  • Which model is being used: doubao-seed-1-8-251215

I made a possible bugfix:

```python
async def _content_to_message_param(
    content: types.Content,
    *,
    provider: str = "",
) -> Union[Message, list[Message]]:
    """Converts a types.Content to a litellm Message or list of Messages.

    Handles multipart function responses by returning a list of
    ChatCompletionToolMessage objects if multiple function_response parts exist.

    Args:
      content: The content to convert.
      provider: The LLM provider name (e.g., "openai", "azure").

    Returns:
      A litellm Message or a list of litellm Messages.
    """

    tool_messages = []
    for part in content.parts:
        if part.function_response:
            response = part.function_response.response
            response_content = (
                response
                if isinstance(response, str)
                else _safe_json_serialize(response)
            )
            tool_messages.append(
                ChatCompletionToolMessage(
                    role="tool",
                    tool_call_id=part.function_response.id,
                    content=response_content,
                )
            )
    if tool_messages:
        return tool_messages if len(tool_messages) > 1 else tool_messages[0]

    # Handle user or assistant messages
    role = _to_litellm_role(content.role)
    message_content = await _get_content(content.parts, provider=provider) or None

    if role == "user":
        return ChatCompletionUserMessage(role="user", content=message_content)
    else:  # assistant/model
        tool_calls = []

        is_thought = False
        content_present = False
        for part in content.parts:
            if part.function_call:
                tool_calls.append(
                    ChatCompletionAssistantToolCall(
                        type="function",
                        id=part.function_call.id,
                        function=Function(
                            name=part.function_call.name,
                            arguments=_safe_json_serialize(part.function_call.args),
                        ),
                    )
                )
            elif part.text or part.inline_data:
                content_present = True
                # part.thought may be None; keep the flag sticky so a
                # thought part is not masked by a later non-thought part
                is_thought = is_thought or bool(part.thought)

        final_content = message_content if content_present else None
        if final_content and isinstance(final_content, list):
            # when the content is a single text object, we can use it directly.
            # this is needed for ollama_chat provider which fails if content is a list
            final_content = (
                final_content[0].get("text", "")
                if final_content[0].get("type", None) == "text"
                else final_content
            )

        # BUGFIX: move thought text into reasoning_content so it is not
        # replayed as normal assistant content in the next request
        reasoning_content = None
        if is_thought:
            reasoning_content = final_content
            final_content = None

        return ChatCompletionAssistantMessage(
            role=role,
            content=final_content,
            tool_calls=tool_calls or None,
            reasoning_content=reasoning_content,
        )
```
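
A quick local check of the fix (a sketch only: it imports a private helper from google.adk.models.lite_llm and assumes _get_content returns the text of a thought=True part):

```python
# Sanity-check sketch for the patched _content_to_message_param.
import asyncio
from google.genai import types
from google.adk.models.lite_llm import _content_to_message_param

content = types.Content(
    role="model",
    parts=[types.Part(text="Let me think step by step...", thought=True)],
)
msg = asyncio.run(_content_to_message_param(content))
assert msg["content"] is None    # thought no longer sent as normal content
assert msg["reasoning_content"]  # thought preserved as reasoning
```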

Labels: models ([Component] Issues related to model support)