Enables structured output assistants #171
Conversation
Docs are coming in a separate PR #172
schema_information = "" | ||
if schema: | ||
schema_information = f"JSON will have the following schema:\n\n{schema}\n\n" |
will have -> must have
probably better
At this point we are just signaling the format to the AI. In some tests, making this prompt more "strict" caused the assistant to enter a loop, overusing tools until it had all the information. So this is here more as a hint than an order.
content=(
    "In the last step of this chat you will be asked to respond in JSON. "
    + schema_information
    + "Gather information using tools. "
What if there are no tools? Perhaps check?
Do you think "Gather information, use appropriate tools if there are any available. " would make this better? The automated tests I wrote don't have tools and it wasn't a problem.
I think we could check if there are any tools or not, then only include that string if there are tools.
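For illustration, a minimal sketch of that check, assuming tools holds the assistant's available tools (names outside the diff are assumptions):

# Assumption: `tools` is the list of tools configured for the assistant.
schema_information = ""
if schema:
    schema_information = f"JSON will have the following schema:\n\n{schema}\n\n"

# Only mention tools in the prompt when there are tools to use.
tools_information = "Gather information using tools. " if tools else ""

content = (
    "In the last step of this chat you will be asked to respond in JSON. "
    + schema_information
    + tools_information
)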
[
    *state["messages"],
    SystemMessage(
        content="Use the information gathered in the conversation to answer."
What exactly is this system message at the end? Is that the recommended way to get a final JSON output?
Also, maybe mention JSON here in this final message?
When you use with_structured_output, LangChain, the model, or both will make the necessary adjustments to ensure the response meets the structure; there's no need to specify anything in the query.
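For reference, a minimal sketch of how with_structured_output is typically used in LangChain (the schema and model name here are illustrative, not from this PR):

from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI


class CityInfo(BaseModel):
    """Illustrative schema for the structured response."""

    name: str = Field(description="City name")
    population: int = Field(description="Estimated population")


llm = ChatOpenAI(model="gpt-4o-mini")
structured_llm = llm.with_structured_output(CityInfo)

# The result comes back as a CityInfo instance; no JSON instructions
# are needed in the prompt itself.
result = structured_llm.invoke("Tell me about Paris.")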
But why is this a SystemMessage? Are you following any pattern you saw in a tutorial?
I couldn't find much info online about multiple system messages. Why not a user message here?
Not following any patterns. Just thought it made more sense as a system message. We could also just not pass any message.
Changed this to HumanMessage for compatibility with Anthropic. Removing it also breaks the model; it expects a human message before answering.
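A minimal sketch of what the final call might look like after that change (structured_llm and state are assumptions based on the surrounding diff):

from langchain_core.messages import HumanMessage

# Assumptions: structured_llm is the model wrapped with with_structured_output,
# and state["messages"] holds the conversation gathered so far.
response = structured_llm.invoke(
    [
        *state["messages"],
        HumanMessage(
            content="Use the information gathered in the conversation to answer."
        ),
    ]
)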
Please check comments.
Adds a structured_output attribute to the assistant class that can receive a TypedDict or a Pydantic class. When this attribute is set, the LLM will be called with with_structured_output before responding.
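As a hypothetical usage sketch, assuming the project's existing AIAssistant class and attribute names (only structured_output is the new attribute; everything else here is assumed):

from pydantic import BaseModel

from django_ai_assistant import AIAssistant  # assumption: the project's assistant base class


class MovieInfo(BaseModel):
    title: str
    release_year: int


class MovieAssistant(AIAssistant):
    # All attribute values below are illustrative.
    id = "movie_assistant"
    name = "Movie Assistant"
    instructions = "Answer questions about movies using the available tools."
    model = "gpt-4o"
    structured_output = MovieInfo  # new: a TypedDict or Pydantic class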