Description
What happened?
The code path that sets the attributes logged to Langfuse does not log tool call messages from chat completions; it only logs the message content:
litellm/litellm/integrations/arize/_utils.py
Lines 238 to 244 in 137a98a
for idx, choice in enumerate(response_obj.get("choices", [])):
    response_message = choice.get("message", {})
    safe_set_attribute(
        span,
        SpanAttributes.OUTPUT_VALUE,
        response_message.get("content", ""),
    )
Consider this simple example:
from __future__ import annotations

import litellm
from dotenv import load_dotenv
from litellm import completion

load_dotenv()

litellm.callbacks.append("langfuse_otel")


def test_edit_note():
    response = completion(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": "Your only job is to call the edit_note tool with the content specified in the user's message.",
            },
            {
                "role": "user",
                "content": "Edit the note with the content: 'This is a test note.'",
            },
        ],
        tools=[
            {
                "type": "function",
                "function": {
                    "name": "edit_note",
                    "description": "Edit the note with the content specified in the user's message.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "content": {"type": "string"},
                        },
                    },
                },
            },
        ],
    )
    return response


test_edit_note()
The response_obj passed to set_attributes is:
ModelResponse(id='chatcmpl-C6IbyhQWQHZhSqEA7swiWMEr34lnz', created=1755617546, model='gpt-4o-2024-08-06', object='chat.completion', system_fingerprint='*********', choices=[Choices(finish_reason='tool_calls', index=0, message=Message(content=None, role='assistant', tool_calls=[ChatCompletionMessageToolCall(function=Function(arguments='{"content":"This is a test note."}', name='edit_note'), id='call_kJq2R2zLowjBugk8VXDjdQAU', type='function')], function_call=None, provider_specific_fields={'refusal': None}, annotations=[]), provider_specific_fields={})], usage=Usage(completion_tokens=19, prompt_tokens=82, total_tokens=101, completion_tokens_details=CompletionTokensDetailsWrapper(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0, text_tokens=None), prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=0, cached_tokens=0, text_tokens=None, image_tokens=None)), service_tier='default')
The output logged to Langfuse is just null, because message.content is None for a tool-call response.
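For reference, the tool call data is present on the response object even though content is None; reading it directly (attribute access inferred from the repr above):

msg = response.choices[0].message
print(msg.content)                           # None
print(msg.tool_calls[0].function.name)       # edit_note
print(msg.tool_calls[0].function.arguments)  # {"content":"This is a test note."}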
What LiteLLM version are you on?
v1.75.8