Unable to use any Azure Models other than OpenAI ones #7275
Are you using the correct endpoints? Non-OpenAI models normally have an endpoint ending in `models.ai.azure.com`. You need to create this endpoint for each model in Azure AI Foundry.
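For reference, a minimal sketch of calling such a per-model serverless endpoint through LiteLLM. The `azure_ai/` provider prefix follows LiteLLM's Azure AI docs; the endpoint URL and key below are placeholders, not values from this thread:

```python
# Sketch: call an Azure AI Foundry serverless deployment via LiteLLM.
# Assumptions: the `azure_ai/` provider prefix (per LiteLLM's Azure AI docs)
# and a placeholder per-model endpoint ending in models.ai.azure.com.
import litellm

response = litellm.completion(
    model="azure_ai/mistral-large",  # provider prefix + model name
    api_base="https://<deployment>.<region>.models.ai.azure.com",  # placeholder endpoint
    api_key="<key-from-the-deployment-page>",  # placeholder key
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```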
I obtained my endpoint from the overview at https://ai.azure.com/, specifically from the project I created for this task. I attempted to use the endpoints listed there, but I received the 404 error I mentioned earlier when using the Azure AI Services endpoint. While writing this, I realized I made an error when copying the Azure AI Inference endpoint: it seems I accidentally removed the trailing part of the URL. This time, I receive a 401 error. I'm not sure if this counts as progress; I have quadruple-checked the API key I configured.
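One way to separate a LiteLLM routing problem from a bad key is to hit the deployment endpoint directly. A hedged sketch, assuming the serverless endpoint exposes an OpenAI-style `/v1/chat/completions` route and accepts the key as a bearer token (both assumptions, not confirmed in this thread):

```python
# Sketch: probe the serverless endpoint directly, bypassing LiteLLM.
# Assumptions: an OpenAI-style /v1/chat/completions route and bearer-token
# auth; the endpoint and key are placeholders.
import requests

ENDPOINT = "https://<deployment>.<region>.models.ai.azure.com"
API_KEY = "<key-from-the-deployment-page>"

resp = requests.post(
    f"{ENDPOINT}/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"messages": [{"role": "user", "content": "ping"}], "max_tokens": 8},
    timeout=30,
)
# A 401 here suggests the key itself is wrong; a 401 only via LiteLLM
# points at the proxy configuration instead.
print(resp.status_code, resp.text[:200])
```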
Yes, the issue seems to be that you're trying to use an OpenAI endpoint instead of an AI Studio (or AI Foundry) endpoint. Their naming is confusing and they keep changing it. You will need an endpoint ending with `models.ai.azure.com`, and you will have one endpoint per model.
Hi @emerzon, thank you for your patience. I'm sure I've never tried to use the Azure OpenAI endpoint for other models like Mistral-large; I understand these are two separate components in LiteLLM and Azure. My endpoint looks something like this: I've explored all the options in the Azure web consoles (https://ai.azure.com and https://portal.azure.com), but they consistently provide the same endpoint URL and API key everywhere. To test whether I have a fundamental misunderstanding of the configuration or setup of LiteLLM, I added the Amazon Bedrock serverless inference endpoints without any issues. I'm really stuck with the Azure AI backend.
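Since the proxy already works with OpenAI and Bedrock models, it may also help to confirm how it answers for the Azure alias specifically. A sketch using the standard `openai` client against a hypothetical local proxy (the address, key, and model alias are placeholders):

```python
# Sketch: query the LiteLLM proxy like any OpenAI-compatible API.
# The proxy URL, key, and model alias below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="sk-anything")
resp = client.chat.completions.create(
    model="mistral-large",  # must match a model_name from the proxy config
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```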
Hi @semidark - My bad. I just checked Azure and noticed that they now offer global endpoints for Mistral models. My current regional endpoints work fine with LiteLLM.
Ah, I see. So it really is a bug. Good to know. How can I assist with the investigation? As mentioned in my original post, I've just started working with LiteLLM. The bug is probably located somewhere around here: https://github.com/BerriAI/litellm/tree/888b3a25afa514847deae0307d4b7cc495206564/litellm/llms/azure. Additionally, this seems to be the official documentation for creating and using serverless inference endpoints: https://learn.microsoft.com/en-us/azure/machine-learning/how-to-deploy-models-serverless?view=azureml-api-2&tabs=azure-studio
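For anyone digging into that code, one quick check is which backend LiteLLM resolves a given model string to. A sketch along these lines may help; it assumes `get_llm_provider` is importable at the top level as in recent LiteLLM versions, and its return shape may differ between releases:

```python
# Sketch: inspect which provider LiteLLM resolves a model string to.
# Assumption: get_llm_provider is exposed at the package top level, as in
# recent LiteLLM releases; the 4-tuple return shape may vary by version.
from litellm import get_llm_provider

for model in ("mistral-large", "azure/mistral-large", "azure_ai/mistral-large"):
    try:
        _, provider, _, _ = get_llm_provider(model=model)
        print(f"{model!r} -> provider {provider!r}")
    except Exception as exc:
        print(f"{model!r} -> {exc}")
```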
I think I ran into this issue just now. I might have found the cause: the handler for
Discussed in #7098
Originally posted by semidark December 9, 2024
Hi Folks,

I just got started with LiteLLM as a proxy because I needed an OpenAI-compatible API for my Azure AI Service. I began by working with OpenAI models (`gpt-4o` and `gpt-4o-mini`), and everything worked perfectly. So I thought, why not try some other models? I attempted to use `Mistral-large`.

Here's my configuration:
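The original config isn't shown above, but for illustration, here is the same `model_list` shape the proxy config uses, expressed with `litellm.Router` in Python. The `azure_ai/` prefix, endpoint, and key are assumptions and placeholders, not the poster's actual values:

```python
# Sketch: the model_list shape LiteLLM's proxy config uses, in Python.
# All values below are hypothetical placeholders, not the original config.
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "mistral-large",  # alias that clients will request
            "litellm_params": {
                "model": "azure_ai/mistral-large",  # assumed provider prefix
                "api_base": "https://<deployment>.<region>.models.ai.azure.com",
                "api_key": "<key>",
            },
        }
    ]
)
```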
I tried a few variations of the model string (`mistral-large`, `Mistral-large-latest`, `mistral-large`), but nothing worked. I always receive the following error:

I checked the metrics of the deployment under ai.azure.com and it has a few successful requests, but no input or output tokens measured.

Any ideas on what I might be doing wrong here?