
(feature idea) Automatically set model according to settings & chat size #75

Closed
cheeseonamonkey opened this issue Aug 14, 2023 · 3 comments
Labels
feature Fairly major behavior overhaul or addition low priority If you want it changed, submit a PR polish Minor QoL tweak or keybind

Comments

@cheeseonamonkey
Contributor

cheeseonamonkey commented Aug 14, 2023

Automatically set model according to settings & chat size

Proposed feature:

This would be best implemented as an optional setting that a user could opt into. If enabled, the only models to choose from in the chat config would be:

  • gpt-3.5-turbo
  • gpt-4

The larger-context model variants could then be selected automatically, and only when required, based on these factors:

  • Chat config settings:
    • Max tokens
    • Max context
  • The size of the chat (for example, it is not necessary to use the gpt-3.5-turbo-16k model if you are well under the 4,096-token limit of gpt-3.5-turbo)
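The selection rule described above could be sketched roughly as follows. This is only an illustration of the idea, not the app's actual code: the model names and context limits reflect the OpenAI models mentioned in this issue, and `pickModel` and its parameters are hypothetical names.

```typescript
// Hypothetical sketch of the proposed auto-selection logic.
type Model = "gpt-3.5-turbo" | "gpt-3.5-turbo-16k";

// Context window sizes for the two gpt-3.5 variants.
const CONTEXT_LIMITS: Record<Model, number> = {
  "gpt-3.5-turbo": 4096,
  "gpt-3.5-turbo-16k": 16384,
};

// Use the base model unless the chat's current token count plus the
// configured reply budget (max tokens) would overflow its context window.
function pickModel(chatTokens: number, maxTokens: number): Model {
  const required = chatTokens + maxTokens;
  return required <= CONTEXT_LIMITS["gpt-3.5-turbo"]
    ? "gpt-3.5-turbo"
    : "gpt-3.5-turbo-16k";
}
```

The same shape would extend to gpt-4 vs. gpt-4-32k by swapping in that pair's limits.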
@cheeseonamonkey cheeseonamonkey changed the title (feature idea) Automatically set model according to settings (feature idea) Automatically set model according to settings & chat size Aug 14, 2023
@jackschedel jackschedel added feature Fairly major behavior overhaul or addition polish Minor QoL tweak or keybind low priority If you want it changed, submit a PR labels Aug 22, 2023
@jackschedel
Owner

I agree that this should be implemented as an optional setting.

Low priority change for now until I have access to gpt4-32k xD

@jackschedel
Owner

not planned - gpt4-turbo is so cheap and has so much context. individual defaults per folder (#27) is a better solution to this imo.

@jackschedel jackschedel closed this as not planned Jan 11, 2024
@jackschedel
Owner

could also use https://openrouter.ai/models/openrouter/auto
