Issue with Azure OpenAI API, Failed to work however much I try #1061
Comments
Have you tested your Azure key in curl or Postman to validate all the elements? You can use tools like curl or Postman to manually test the endpoint with your key and payload.
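For reference, here is a minimal sketch of such a manual test, using Python's `requests` in place of Postman. The resource name, deployment name, API version, and key are placeholders, not values from this thread:

```python
# Placeholders: replace resource, deployment, api-version, and key with your own values.
import requests

endpoint = "https://<your-resource>.openai.azure.com"
deployment = "<your-deployment-name>"
api_version = "2024-02-15-preview"  # any API version your resource supports
api_key = "<your-azure-openai-key>"

url = f"{endpoint}/openai/deployments/{deployment}/chat/completions?api-version={api_version}"
headers = {
    "api-key": api_key,  # Azure OpenAI expects the 'api-key' header, not 'Authorization: Bearer'
    "Content-Type": "application/json",
}
payload = {
    "messages": [{"role": "user", "content": "Say hello"}],
    "max_tokens": 16,
}

resp = requests.post(url, headers=headers, json=payload, timeout=30)
print(resp.status_code)  # 200 means the key, endpoint, and deployment name are all correct
print(resp.json())
```

A 401 here usually points to a wrong key or header, while a 404 usually means the deployment name or API version is off.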
I want to use Azure OpenAI, which I tested in Postman and it worked with a 200 OK. Headers: value = [API KEY]. Body:
This gives a 200 OK code. Moving on to GPT-Researcher, when I set the .env like this:
I get this error:
But if I set the .env like this, using OPENAI_API_KEY=:
It works at first, but then I get this error and it stops in the middle:
This is the full error log. Note: I used "Name" to hide my endpoint ID.
@Videmak Can you try adding an env var:
If that doesn't work, @Videmak, try reverting to version 0.10.3 of the gptr pip package. In case it helps, it looks like part of the solution is here:
With the current version (e5adce2) I can use Azure OpenAI without errors. My .env contains this:
My embeddings endpoint has a quota of 700k tokens/minute, and I still get 429 HTTP errors that indicate throttling, but GPT-Researcher automatically retries after a few seconds, as indicated in the log:
Can you confirm it is working for you now, too?
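For context on the 429s mentioned above: the retry behaviour shown in the log amounts to a backoff loop along these lines. This is an illustrative sketch only, not GPT-Researcher's actual code, and `embed_with_retry` is a made-up helper name:

```python
# Illustrative retry-with-backoff loop for a throttled (HTTP 429) embeddings endpoint.
import time
import requests

def embed_with_retry(url, headers, payload, max_attempts=5):
    for attempt in range(1, max_attempts + 1):
        resp = requests.post(url, headers=headers, json=payload, timeout=30)
        if resp.status_code != 429:
            resp.raise_for_status()  # surface real errors (401, 404, ...)
            return resp.json()
        # 429 means the per-minute token quota was exceeded; back off and try again
        wait = min(2 ** attempt, 30)
        print(f"429 received, retrying in {wait}s (attempt {attempt}/{max_attempts})")
        time.sleep(wait)
    raise RuntimeError("Embedding request kept hitting the rate limit")
```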
@Videmak - the logging is not explicit, but I think what is happening is that the default settings of GPT-Researcher are OpenAI. So if you don't have a strategic LLM or an embedding defined for Azure, it tries to use OpenAI instead. As I wrote in #1067 (comment), I think there should be at least a warning if only some of the parameters are set. In this case, the LLMs are set to Azure, but not the embedding model, which falls back to the default. The fact that the default config is used, and that it is OpenAI, should be mentioned in the log, so everyone understands what the problem is.
This seems to be a common point of confusion. My vote is that we add logic to use the Azure embedding model by default when one of the LLM config variables is set to Azure. New users may not realise that we're leveraging an embedding model.
@danieldekay's fix is now in the docs.
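For anyone hitting the same fallback issue, here is a sketch of what a fully Azure-scoped .env might look like. The variable names follow the azure_openai provider convention discussed above, but the exact names can differ between gpt-researcher versions, and the deployment names are placeholders; check the docs for your version:

```
# Sketch only: variable names may differ between gpt-researcher versions;
# deployment names are placeholders for your own Azure deployments.
AZURE_OPENAI_API_KEY=<your-azure-key>
AZURE_OPENAI_ENDPOINT=https://<your-resource>.openai.azure.com/
OPENAI_API_VERSION=2024-05-01-preview
FAST_LLM=azure_openai:<chat-deployment-name>
SMART_LLM=azure_openai:<chat-deployment-name>
STRATEGIC_LLM=azure_openai:<chat-deployment-name>
EMBEDDING=azure_openai:<embedding-deployment-name>
```

The key point from the discussion above is the last line: if the embedding is not explicitly pointed at Azure, it silently falls back to the OpenAI default.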
Describe the bug
I used the OpenAI API and everything worked fine until I switched to the Azure OpenAI API. I even re-downloaded the latest repository and did a fresh installation, added a .env file, and configured the details.
This is my .env file:
To Reproduce
Steps to reproduce the behavior:
When I start researching, it works fine until it starts generating the report. This is the error I get:
I get a wrong-key error even though I tried both of the keys provided by Azure. Azure gives you two keys and either should work, but here neither works.
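One quick way to rule out a mis-loaded .env is to print what the process actually sees. This is a sketch assuming python-dotenv is installed and the standard Azure variable names; adjust the names to whatever your gptr version actually reads:

```python
# Sanity check: confirm which Azure values the process picks up from .env.
import os
from dotenv import load_dotenv

load_dotenv()  # loads .env from the current working directory

for name in ("AZURE_OPENAI_ENDPOINT", "OPENAI_API_VERSION", "AZURE_OPENAI_API_KEY"):
    value = os.getenv(name)
    shown = value[:4] + "..." if value else None  # avoid printing the full key
    print(f"{name} = {shown}")
```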