@dskarzh commented on Dec 10, 2025

Issue

LangChain4j does not handle Gemini thought signatures correctly, which significantly degrades model performance:

  • The model cannot "remember" previous messages in the chat
  • It fails to recognize responses to function calls
  • It hallucinates user messages
  • It answers its own questions

Why is LangChain4j's handling incorrect?

LangChain4j handles thought signatures only for function calls, not for regular text content parts. Handling signatures in text content parts is optional but highly recommended.

It also moves signatures between content parts. For example, suppose the model returns two parts: a text part carrying a signature, followed by a function call. LangChain4j assumes that every signature belongs to a function call, so when it reconstructs the message back into Gemini content, it attaches the signature to the wrong part. The result is a text part followed by a function call carrying the signature: the signature has migrated from the text part to the function call part.

Additionally, LangChain4j merges several text content parts into one when creating an AiMessage from Gemini content, making it impossible to reconstruct the original part order and thought signature positions when the AiMessage is sent back to the Gemini API. The sketch below illustrates the signature migration concretely.
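
To make this concrete, here is a minimal, self-contained sketch. The `GeminiPart` record is a hypothetical simplification of a Gemini content part (the API's JSON uses `text`, `functionCall`, and `thoughtSignature` fields); it is not a LangChain4j or Google SDK type.

```java
import java.util.List;

public class SignatureMisplacementDemo {

    // Hypothetical, simplified stand-in for a Gemini content part.
    record GeminiPart(String text, String functionCall, String thoughtSignature) {}

    public static void main(String[] args) {
        // What the model actually returns: the signature lives on the TEXT part.
        List<GeminiPart> returned = List.of(
                new GeminiPart("Let me check the weather.", null, "sig-abc123"),
                new GeminiPart(null, "getWeather", null));

        // What gets sent back today: every signature is assumed to belong to a
        // function call, so it silently migrates to the wrong part.
        List<GeminiPart> reconstructed = List.of(
                new GeminiPart("Let me check the weather.", null, null),
                new GeminiPart(null, "getWeather", "sig-abc123"));

        System.out.println("returned:      " + returned);
        System.out.println("reconstructed: " + reconstructed);
    }
}
```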

Solution

Do what the official Google Gen AI SDK does: return the model response to the model exactly as it was received. This automatically preserves part ordering, thought signature content, and signature position.
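
Conceptually, the fix is a one-liner. The sketch below reuses the hypothetical `GeminiPart` type from above, plus an equally hypothetical `GeminiContent` wrapper, to show the echo-back approach; none of these names are real SDK identifiers.

```java
import java.util.ArrayList;
import java.util.List;

public class EchoBackDemo {

    // Same hypothetical simplifications as in the previous sketch.
    record GeminiPart(String text, String functionCall, String thoughtSignature) {}
    record GeminiContent(String role, List<GeminiPart> parts) {}

    public static void main(String[] args) {
        // A model turn as received from the API: signature on the text part.
        GeminiContent modelTurn = new GeminiContent("model", List.of(
                new GeminiPart("Let me check the weather.", null, "sig-abc123"),
                new GeminiPart(null, "getWeather", null)));

        List<GeminiContent> history = new ArrayList<>();

        // Append the turn verbatim. Because nothing is decomposed into an
        // intermediate message and rebuilt, part boundaries, ordering, and
        // signature positions cannot drift.
        history.add(modelTurn);
    }
}
```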

Implementation

The original content parts are stored in the attributes of the AiMessage. As a result, the original parts are fully preserved even when messages pass through, for example, a persistent chat memory store.
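
A minimal sketch of this mechanism, under stated assumptions: the attribute key, the `attributes(...)` builder method, and the `attributes()` accessor are illustrative names based on the description above, not confirmed LangChain4j API, and real code would also rebuild parts from the message when no stash is present.

```java
import dev.langchain4j.data.message.AiMessage;

import java.util.List;
import java.util.Map;

public class OriginalPartsRoundTrip {

    // Illustrative attribute key; the actual key name in the PR may differ.
    static final String ORIGINAL_PARTS_KEY = "gemini-original-content-parts";

    // Gemini response -> AiMessage: stash the raw parts in attributes so they
    // survive chat memory, including stores that persist messages.
    static AiMessage toAiMessage(String extractedText, List<?> originalParts) {
        return AiMessage.builder()
                .text(extractedText)
                .attributes(Map.of(ORIGINAL_PARTS_KEY, originalParts))
                .build();
    }

    // AiMessage -> Gemini content: prefer the stashed original parts.
    static List<?> toGeminiParts(AiMessage message) {
        Object stashed = message.attributes().get(ORIGINAL_PARTS_KEY);
        if (stashed instanceof List<?> parts) {
            return parts; // ordering and signature positions intact
        }
        return List.of(); // fallback: rebuild from the message (omitted here)
    }
}
```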

@dskarzh changed the title "[TB] Gemini: send original content parts back to model" to "Gemini: send original content parts back to model" on Dec 10, 2025