Gemini: send original content parts back to model #4
Issue
LangChain4j does not handle thought signatures correctly for Gemini, which leads to significant degradation of model performance:
Why is LangChain4j's handling incorrect?
It only handles thought signatures for function calls, but not for regular text content parts. Handling signatures in text content parts is optional but highly recommended.
It also moves signatures between content parts. For example, the model returns two parts: text with signature + function call. LangChain4j assumes that all signatures automatically belong to function calls and, when reconstructing this message back to Gemini content, places the signature in the wrong part. The result becomes: text + function call with signature. Notice the signature was moved from the text part to the function call part.
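The misplacement described above can be sketched as follows. This is an illustrative reproduction, not the actual LangChain4j code: the `Part` record and `faultyReconstruct` method are hypothetical stand-ins that mimic the assumed behavior of collecting all signatures and reattaching them only to function-call parts.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical minimal model of a Gemini content part; not a real
// LangChain4j or Gemini SDK type.
record Part(String kind, String payload, String thoughtSignature) {}

public class SignatureMisplacement {
    // Mimics the faulty reconstruction: every collected signature is
    // reattached to a function-call part, regardless of which part it
    // originally belonged to.
    static List<Part> faultyReconstruct(List<Part> modelParts) {
        List<String> signatures = new ArrayList<>();
        for (Part p : modelParts) {
            if (p.thoughtSignature() != null) signatures.add(p.thoughtSignature());
        }
        List<Part> rebuilt = new ArrayList<>();
        for (Part p : modelParts) {
            if (p.kind().equals("functionCall") && !signatures.isEmpty()) {
                rebuilt.add(new Part(p.kind(), p.payload(), signatures.remove(0)));
            } else {
                // The signature the text part carried is silently dropped here.
                rebuilt.add(new Part(p.kind(), p.payload(), null));
            }
        }
        return rebuilt;
    }

    public static void main(String[] args) {
        // Model returned: a text part carrying the signature + a function call.
        List<Part> fromModel = List.of(
            new Part("text", "Reasoning about the request...", "sig-123"),
            new Part("functionCall", "getWeather", null));
        List<Part> sentBack = faultyReconstruct(fromModel);
        // The signature has migrated from the text part to the function call.
        System.out.println(sentBack.get(0).thoughtSignature()); // null
        System.out.println(sentBack.get(1).thoughtSignature()); // sig-123
    }
}
```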
Additionally, LangChain4j merges several text content parts into one when creating an `AiMessage` from Gemini content, making it impossible to reconstruct the correct content-part order and thought-signature positions when sending the `AiMessage` back to the Gemini API.
Solution
Do what the official Google Gen AI SDK does: return the model response to the model exactly as it was received. This automatically preserves part ordering, thought-signature content, and signature position.
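The approach reduces the round trip to an identity mapping over the model's parts. A minimal sketch, again using a hypothetical `Part` type rather than real SDK classes:

```java
import java.util.List;

// Hypothetical content-part type; illustrative only.
record Part(String kind, String payload, String thoughtSignature) {}

public class EchoOriginalParts {
    // The "reconstruction" step is a verbatim copy: what the model sent is
    // exactly what goes back, so ordering and signatures cannot drift.
    static List<Part> toGeminiContent(List<Part> originalParts) {
        return List.copyOf(originalParts);
    }

    public static void main(String[] args) {
        List<Part> fromModel = List.of(
            new Part("text", "Reasoning...", "sig-123"),
            new Part("functionCall", "getWeather", null));
        List<Part> sentBack = toGeminiContent(fromModel);
        System.out.println(fromModel.equals(sentBack)); // true
    }
}
```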
Implementation
Original content parts are stored in attributes of the `AiMessage`. This means the original parts will be fully preserved in a persistent chat memory store, for example.