Retry on response format failure #266
Conversation
Force-pushed from a3990f7 to 25bc880
As discussed in #86, using
Looking for examples for a minimal reproduction. @neginraoof can maybe provide one. In https://rwilinski.ai/posts/benchmarking-llms-for-structured-json-generation/, gpt-4o-mini gets 100% on all attempts.
Had Claude help come up with an example which the model struggles to produce a valid response for.
# Allows us to retry on responses that don't match the response format
self.convert_response_to_response_format(
    generic_response.response_message, self.prompt_formatter.response_format
)
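The retry idea above can be sketched as follows. This is a hedged, minimal illustration using Pydantic v2 validation, not the repo's actual implementation; `call_model`, `Person`, and `generate_with_retry` are hypothetical names, and the real code would re-issue the LLM request on each attempt.

```python
# Minimal sketch: retry generation until the response validates against
# the requested Pydantic response format. All names here are illustrative.
from pydantic import BaseModel, ValidationError


class Person(BaseModel):
    name: str
    age: int


def call_model(prompt: str) -> str:
    # Stand-in for a real LLM call; returns the raw JSON string.
    return '{"name": "Ada", "age": 36}'


def generate_with_retry(
    prompt: str, response_format: type[BaseModel], max_retries: int = 3
) -> BaseModel:
    last_error = None
    for _ in range(max_retries):
        raw = call_model(prompt)
        try:
            # A ValidationError here means the response didn't match the
            # response format, so we fall through and try again.
            return response_format.model_validate_json(raw)
        except ValidationError as e:
            last_error = e
    raise ValueError(f"no valid response after {max_retries} attempts") from last_error


person = generate_with_retry("Describe Ada.", Person)
```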
It returns a dictionary; we don't need it?
self, response_message: str | dict, response_format: Optional[BaseModel]
) -> Optional[dict | str]:
    """
    Converts a response message to a specified Pydantic model format.
A bit lost on the motivation here.
This function we were using anyway; I just moved it out so we don't duplicate code. We need to convert the string that the model returns into the correct response format.
Ah sorry.
Also, does it make sense to put this in prompt_formatter?
So self.prompt_formatter.convert_response_to_response_format(response_message)
Fixes #86