Missing OpenAI models #26
Comments
Hi @jorge-menjivar, just wondering if there was any update on adding these models?
Hey y'all, this is a real easy patch ~
The model will show up in the dropdown menu :)
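The "easy patch" described above would amount to registering the new model in the app's model table. A minimal sketch, assuming a `PossibleAiModels`-style lookup keyed by model id with the `maxLength`/`tokenLimit`/`requestLimit` fields mentioned later in this thread (the exact shape in the repo may differ, and the numbers are illustrative):

```typescript
// Hypothetical shape of a model entry; field names follow the ones
// discussed in this thread, values are illustrative assumptions.
interface AiModel {
  id: string;
  maxLength: number;    // max characters accepted in the prompt box
  tokenLimit: number;   // model context window, in tokens
  requestLimit: number; // tokens allowed for the request itself
}

const PossibleAiModels: Record<string, AiModel> = {
  'gpt-3.5-turbo': {
    id: 'gpt-3.5-turbo',
    maxLength: 12000,
    tokenLimit: 4096,
    requestLimit: 3000,
  },
  // The newly added 16k variant:
  'gpt-3.5-turbo-16k': {
    id: 'gpt-3.5-turbo-16k',
    maxLength: 48000,
    tokenLimit: 16384,
    requestLimit: 15000,
  },
};
```

Once an entry like this exists, the model id returned by the vendor can be matched against the table and shown in the dropdown.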
gpt-3.5-turbo-16k has been added
@jorge-menjivar I think this issue can be closed
@sebiweise I would like to make it possible to have the option to see all models, including the versioned ones, but most importantly models with dynamic names like fine-tuned models. Maybe as an advanced feature that is disabled by default. This would require some moderate changes to our models structure though, which is why I'm leaving it for later.
You mean like a database table just for available models instead of the types integration?
Yeah, something like that would do it. The other issue is getting the proper max token length for arbitrary models. I know Ollama, at least, doesn't have a way to do this in their endpoints API, so I will need to investigate and do a PR to them (I asked on Discord and got no response, so I'm assuming there is no way). If we were to do this, there would no longer be a need to implement new models manually, unless something substantial is different about the model.
I just removed the PossibleAiModels types integration and am returning every possible AiModel now, but I don't know how to get the correct information for maxLength, tokenLimit, and requestLimit from the OpenAI API. Any idea? Ollama isn't working either, but there is no endpoint for now, like you said.
We might need to wait for support for this to come from OpenAI, or we might have to come up with a new solution to the problem. For OpenAI models we can parse the model id and assume their base model from it.
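The id-parsing approach suggested above could be sketched like this, assuming the OpenAI naming conventions of the time (date suffixes such as `-0613` on versioned models, and a `ft:` prefix on fine-tuned models); `KNOWN_BASE_MODELS` and `inferBaseModel` are hypothetical names for illustration:

```typescript
// Base models with known token limits, ordered longest-first so that
// longest-prefix matching resolves "gpt-4-32k-0613" to "gpt-4-32k",
// not "gpt-4".
const KNOWN_BASE_MODELS = ['gpt-4-32k', 'gpt-4', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo'];

function inferBaseModel(modelId: string): string | undefined {
  // Fine-tuned ids look like "ft:gpt-3.5-turbo:my-org::abc123";
  // the second segment names the base model.
  const id = modelId.startsWith('ft:') ? modelId.split(':')[1] : modelId;
  // Match either the bare base name or a versioned variant of it.
  return KNOWN_BASE_MODELS.find((base) => id === base || id.startsWith(base + '-'));
}
```

The token limits of the inferred base model can then be assumed for the versioned or fine-tuned variant.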
@jorge-menjivar What do you think of a new service, hosted by me or us, that will have a database of every possible AI model and offer a submit form to let users post newly found AI models and/or vendors? Maybe we could create a cron job that gathers new AI models from known vendors automatically. When the models are saved, we can give them something like a "draft" status so someone can complete the corresponding model settings.
I like this idea. So basically it's a community database/index for models?
Also check the pre-release app v0.1.1. I made it so that it detects all models and lets you put the correct token window size in the settings.
Yes, so basically just the types we have now, in a database that can be contributed to easily. I would create a new service for that so that we don't have a problem with another endpoint that isn't available in the desktop app. Then you can add another API call to that service (maybe later on including an API key) and get the "possible AI models" including all settings. You still need to get all models that the supported vendors in your app support, but you could get the settings for max_tokens and so on from the other endpoint.
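The flow described above — list the models the vendor serves, then attach settings from the community service — can be sketched as follows. The merge step is kept pure so it can run without network access; the URLs and response shapes in the usage comment are illustrative assumptions, not the real endpoints:

```typescript
// Assumed settings shape returned by the community service, keyed by model id.
interface ModelSettings {
  tokenLimit: number;
}

// Pure merge step: keep only models the vendor actually serves, attach the
// community-provided settings, and fall back to a conservative default for
// models the service doesn't know yet.
function mergeModelSettings(
  vendorModels: { id: string }[],
  knownSettings: Record<string, ModelSettings>,
  fallback: ModelSettings = { tokenLimit: 2048 },
): Array<{ id: string } & ModelSettings> {
  return vendorModels.map((m) => ({
    id: m.id,
    ...(knownSettings[m.id] ?? fallback),
  }));
}

// Illustrative usage (placeholder URLs, not the real service):
// const vendor = await (await fetch('https://api.openai.com/v1/models')).json();
// const settings = await (await fetch('https://ai-services.example/model')).json();
// const models = mergeModelSettings(vendor.data, settings);
```

Separating the merge from the HTTP calls also leaves room to cache or batch the settings request later.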
Just created a basic Next.js app. For now it will just display the data from the PlanetScale DB, but I'm currently working on the submit form and a little admin dashboard: https://ai-services-web.vercel.app/ https://ai-services-web.vercel.app/vendor https://ai-services-web.vercel.app/model The submit forms aren't working at the moment.
@sebiweise That's really awesome. If Vercel gets too slow to handle this, let me know. I have over a dozen dedicated servers, so I can get you a VPS to run this on at no cost.
I'd be happy to add missing params if you need help.
Thank you for your help. I think we can just try to use Vercel for now, and we will just need to keep an eye on the insights/usage of the endpoint. I think the database will need an upgrade later on, but we will see. The data will be cached, so I think it's no problem for now.
@sebiweise Looking great! Do we need to keep the max length and request limit? I removed them from the desktop app because they seemed redundant. Max length is just a guess for most models, and request limit could be set to token_limit - 100.
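The simplification proposed above — deriving the request limit instead of storing it per model — is a one-liner; `requestLimitFor` is a hypothetical helper name:

```typescript
// Derive requestLimit from tokenLimit rather than storing it per model,
// reserving ~100 tokens of headroom as suggested in the thread.
function requestLimitFor(tokenLimit: number): number {
  return tokenLimit - 100;
}
```

With this, the community database would only need to carry `tokenLimit` per model.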
Yes, I think we will have to take a look at whether we can remove some of the params and which ones are actually needed. I just pushed a working example that uses the "AI-Services" to get the possible AI models and the correct params/limits. Main implementation (maybe we can change some code in the future to reduce the HTTP calls that are done):
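One way to reduce those HTTP calls, as mentioned above, is to memoize the settings response with a time-to-live; this is a generic sketch (the `cached` helper is a hypothetical name, not code from the repo):

```typescript
// Memoize an async fetcher so repeated lookups within ttlMs share one
// in-flight or completed request instead of issuing new HTTP calls.
function cached<T>(fn: () => Promise<T>, ttlMs: number): () => Promise<T> {
  let value: Promise<T> | undefined;
  let fetchedAt = 0;
  return () => {
    const now = Date.now();
    if (!value || now - fetchedAt > ttlMs) {
      fetchedAt = now;
      value = fn();
    }
    return value;
  };
}

// Illustrative usage (placeholder URL):
// const getModelSettings = cached(
//   () => fetch('https://ai-services.example/model').then((r) => r.json()),
//   5 * 60 * 1000, // refresh at most every five minutes
// );
```

Since the promise itself is cached, concurrent callers during the first request also share a single HTTP call.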
Several useful OpenAI models are missing from the types. For example, gpt-3.5-turbo-16k as well as the dated models.