Add Python usage docs #237
Conversation
Here's a very nitpicky review of the basic Python docs (sorry :-).
```python
import schema as sentiment
from typechat import Failure, TypeChatJsonTranslator, TypeChatValidator, create_language_model, process_requests


async def main():
```
Stray line.
> `complete` is just a function that takes a `string` and eventually returns a `string` if all goes well.
>
> For convenience, TypeChat provides two functions out of the box to connect to the OpenAI API and Azure's OpenAI Services. You can call these directly.
Honestly that's all you need to understand the example; the earlier part of this section (from line 44 on) is advanced stuff that I'd move much further down (maybe with a link from here).
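For what it's worth, the contract I took away for `complete` is roughly the following sketch; the `async` signature is my reading of "eventually returns", not something the docs state explicitly:

```python
async def complete(prompt: str) -> str:
    """Send `prompt` to whatever model endpoint you like and return its text reply."""
    ...
```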
```python
from typechat import Failure, TypeChatJsonTranslator, TypeChatValidator, create_language_model, process_requests


async def main():
    env_vals = dotenv_values()
```
This requires a supporting import, a hint on how to install it (there are many modules with dotenv in their name on PyPI, but what we need is `pip install python-dotenv`), and a brief explanation of which keys `create_language_model()` looks for. I stumbled quite a bit over this. :-(
I realize that you discuss this further down -- I wonder if there's a way to present things so that it's easier to just read it from top to bottom.
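Something like the following is roughly what I was missing. This is only a sketch: the `vals=` keyword comes from the default I saw in the signature, and the two `*_API_KEY` keys are taken from the explanation further down in these docs.

```python
# pip install python-dotenv   <- the PyPI package that provides dotenv_values
from dotenv import dotenv_values

from typechat import create_language_model


async def main():
    # dotenv_values() reads key/value pairs from a local .env file into a dict.
    env_vals = dotenv_values()

    # create_language_model then picks OpenAI vs. Azure OpenAI depending on
    # whether OPENAI_API_KEY or AZURE_OPENAI_API_KEY is present in that dict.
    model = create_language_model(vals=env_vals)
```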
> With `create_language_model`, you can populate your environment variables and pass them in.
> Based on whether `OPENAI_API_KEY` or `AZURE_OPENAI_API_KEY` is set, you'll get a model of the appropriate type.
It's honestly a bit confusing to call these "environment variables" since they are read from a file, not stored in the OS- (or at least shell-) managed environment variables. Unless the default (`vals=None`) actually reads `os.environ`?
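To make the distinction concrete (nothing TypeChat-specific in this sketch):

```python
import os

from dotenv import dotenv_values

vals_from_file = dotenv_values()   # parsed from a .env file; os.environ is not modified
vals_from_env = dict(os.environ)   # the actual shell/OS environment variables

# Either dict could presumably be handed to create_language_model, as long as
# it carries OPENAI_API_KEY or AZURE_OPENAI_API_KEY.
```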
> The `TypeChatLanguageModel` returned by these functions has a few attributes you might find useful:
Suggested change:

```diff
-The `TypeChatLanguageModel` returned by these functions has a few attributes you might find useful:
+The `TypeChatLanguageModel` returned by these functions has a few writable attributes you might find useful:
```
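For example (assuming `retry_max_attempts`, which the retry discussion below mentions, is one of those writable attributes):

```python
# After creating the model as in the snippet above:
model.retry_max_attempts = 5  # adjust the retry budget after construction
```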
> `process_requests` takes 3 things.
> First, there's the prompt prefix - this is what a user will see before their own text in interactive scenarios.
> You can make this playful.
> We like to use emoji here. 😄
Why? I still haven't figured out how to type emoji on a keyboard -- I only know how to do it on my phone. :-(
> We'll come back to this.
>
> ## Creating the Prompt
"Prompt" is an ambiguous term here. Does it refer to the user input prompt (only used when file_path is None) or the prompt for the LLM?
```python
    file_path = sys.argv[1] if len(sys.argv) == 2 else None
    await process_requests("😀> ", file_path, request_handler)
```
In Python it's more common to name such a variable or argument `filename` or `file`.
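With that rename applied, the same two lines from inside `main()` would read:

```python
# Same example as above, just with the suggested variable name:
filename = sys.argv[1] if len(sys.argv) == 2 else None
await process_requests("😀> ", filename, request_handler)
```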
> We're calling the `translate` method on each string and getting a response.
> If something goes wrong, TypeChat will retry requests up to a maximum specified by `retry_max_attempts` on our `model`.
This is a feature of the Model class, right? May be helpful to mention that.
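For reference, the handler the example passes to `process_requests` does roughly this. A sketch only: the result fields (`message` on a `Failure`, `value` on a success) are my assumptions.

```python
async def request_handler(request: str):
    result = await translator.translate(request)
    if isinstance(result, Failure):
        print("Translation failed:", result.message)  # `message` is an assumed field name
    else:
        print(result.value)  # `value` on the success result is an assumed field name
```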
> A `TypeChatJsonTranslator` brings all these concepts together.
> A translator takes a language model, a validator, and our expected type, and provides a way to translate some user input into objects following our schema.
> To do so, it crafts a prompt based on the schema, reaches out to the model, parses out JSON data, and attempts validation.
> Optionally, it will craft repair prompts and retry if validation fails.
I would love to read more about the repair process. I expect that in practice one might have to tweak this.
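To make the wiring concrete, here is a sketch of what I pieced together; `Sentiment` as the schema type name and the constructor argument order are my assumptions, not something stated in the docs.

```python
# model comes from create_language_model as shown earlier;
# `sentiment` is the `import schema as sentiment` module from the example.
validator = TypeChatValidator(sentiment.Sentiment)
translator = TypeChatJsonTranslator(model, validator, sentiment.Sentiment)
```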