Swap chat model provider at runtime #1245
More of a question than an issue, as nothing about this is mentioned in the docs.

Version: 0.24.0.CR1

With the following settings:
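(The original settings block did not survive; below is a hedged reconstruction based on quarkus-langchain4j's configuration keys and the environment variable mentioned further down — the values are illustrative, not the exact original config.)

```properties
# Hypothetical reconstruction (illustrative values).
# A named model "m1" whose provider is resolved from an environment
# variable, defaulting to ollama for local development:
quarkus.langchain4j.m1.chat-model.provider=${M1_CHAT_MODEL_PROVIDER:ollama}
quarkus.langchain4j.ollama.chat-model.model-id=llama3
quarkus.langchain4j.openai.api-key=${OPENAI_API_KEY:dummy}
```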
I was hoping to be able to switch between ollama and openai with the same build, depending on whether I'm running locally or in a deployed environment. However, even when the `M1_CHAT_MODEL_PROVIDER=openai` variable is passed to the container (a Docker container), the app keeps trying to use ollama. Did I miss something, or is the model provider constrained to be set at build time? If so, what is the recommended way to approach this?
Comments

The model to use is indeed constrained to build time.

Thanks @geoand, what is the recommendation on how to approach this? Can I somehow register all of them with different names and choose one based on another config property?

It depends on what exactly you want to achieve. I believe @maxandersen had an example where he was doing something similar.

I've achieved this several times using several different approaches. With any of these approaches it is still a build-time switch, but no code or config changes are needed between environments. One such approach is sketched below.
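For reference, here is a minimal sketch of one such approach, assuming quarkus-langchain4j's named-model support (the `@ModelName` qualifier and per-name `chat-model.provider` keys). Both providers are compiled in at build time under different model names, and an ordinary runtime property picks which bean to call. The names `local`, `cloud`, and `app.active-model` are made up for illustration.

```properties
# Both providers are baked in at build time under distinct model names
# (illustrative names, not from this thread):
quarkus.langchain4j.local.chat-model.provider=ollama
quarkus.langchain4j.cloud.chat-model.provider=openai
quarkus.langchain4j.openai.api-key=${OPENAI_API_KEY:dummy}
```

```java
import dev.langchain4j.model.chat.ChatLanguageModel;
import io.quarkiverse.langchain4j.ModelName;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import org.eclipse.microprofile.config.inject.ConfigProperty;

@ApplicationScoped
public class RuntimeSelectedChatModel {

    @Inject
    @ModelName("local")  // the ollama-backed named model
    ChatLanguageModel localModel;

    @Inject
    @ModelName("cloud")  // the openai-backed named model
    ChatLanguageModel cloudModel;

    // A plain runtime property, e.g. APP_ACTIVE_MODEL=cloud in the container.
    // "app.active-model" is a hypothetical name, not a quarkus-langchain4j key.
    @ConfigProperty(name = "app.active-model", defaultValue = "local")
    String activeModel;

    public String chat(String prompt) {
        // Selection happens per call, at runtime; both beans already exist.
        ChatLanguageModel model = "cloud".equals(activeModel) ? cloudModel : localModel;
        return model.generate(prompt);
    }
}
```

The trade-off is that every candidate provider is part of the build, dependencies included; only the choice of which bean handles requests moves to runtime, which is consistent with the build-time constraint described above.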