Thank you for making this fantastic tool! Here's a feature request to consider:
Currently, when creating an extraction chain, only two optional input variables (`type_description` and `format_instructions`) are accepted when defining the prompt template.
Would it make sense to accept additional input variables? For instance, for use cases where the LLM needs to know the current date and time, an extra `current_datetime` input variable would be an ideal way to supply that context to the LLM.
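For illustration, this is roughly the usage being requested. It is a hypothetical sketch, not the current API: the `current_datetime` template variable and the run-time keyword argument do not exist today, and `llm`, `schema`, and `user_text` stand in for objects defined elsewhere.

```python
from datetime import datetime

from kor import create_extraction_chain  # assuming the library in question
from langchain.prompts import PromptTemplate

# Hypothetical: a third input variable alongside the two supported ones.
template = PromptTemplate(
    input_variables=["type_description", "format_instructions", "current_datetime"],
    template=(
        "The current date and time is {current_datetime}.\n\n"
        "{type_description}\n\n"
        "{format_instructions}\n"
    ),
)

chain = create_extraction_chain(llm, schema, instruction_template=template)
# Hypothetical: extra variables forwarded to the template at run time.
chain.run(text=user_text, current_datetime=datetime.now().isoformat())
```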
Hi @solohan22! Thanks for using the library, and sorry for the late response -- I'm on vacation this month and have limited computer access until the end of July.
I think this makes sense. The way to do this right now is to create the extraction chain dynamically, with the appropriate template already pre-filled. Instantiating all the objects is fairly cheap, so performance isn't really a concern; however, I can see how this is inconvenient as an API.
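For concreteness, here is a minimal sketch of that workaround, assuming the library is kor and using an illustrative schema. The chain is rebuilt per request with the timestamp interpolated as a literal, since only `type_description` and `format_instructions` may appear as input variables:

```python
from datetime import datetime

from kor import Object, Text, create_extraction_chain
from langchain.prompts import PromptTemplate

# Illustrative schema; substitute your own.
schema = Object(
    id="meeting",
    description="Details about a meeting mentioned in the text.",
    attributes=[
        Text(id="topic", description="What the meeting is about."),
        Text(id="when", description="When the meeting takes place."),
    ],
)

def build_chain(llm):
    # Re-created on each call so the embedded timestamp stays fresh.
    instruction_template = PromptTemplate(
        input_variables=["type_description", "format_instructions"],
        template=(
            # The datetime is interpolated here, before the template is
            # built, so it never needs to be an input variable.
            f"The current date and time is {datetime.now().isoformat()}.\n\n"
            "Your goal is to extract structured information that matches "
            "the form described below.\n\n"
            "{type_description}\n\n"
            "{format_instructions}\n"
        ),
    )
    return create_extraction_chain(
        llm, schema, instruction_template=instruction_template
    )
```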
I'm not planning on writing much code this month, but happy to review PRs if you want to work on this feature.
> For instance, for use cases where the LLM needs to know the current date and time
I'm wondering if you could explain a bit more about this use case. (I may be able to offer an alternative solution if you're actually working with datetime math.)