
Error "Codex CLI error: Invalid request - The model: code-davinci-002 does not exist" #129

Open
ChadLevy opened this issue Mar 24, 2023 · 18 comments

Comments

@ChadLevy

ChadLevy commented Mar 24, 2023

Edit 3: it looks like OpenAI shut down their Codex API (https://news.ycombinator.com/item?id=35242069). Apparently there was an e-mail announcement, but I never received one.

Also, the API apparently remains available through Azure (https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/work-with-code). Perhaps using Azure would be an alternative?

Original:


I'm getting this error on each attempt to use the Codex CLI:

Codex CLI error: Invalid request - The model: `code-davinci-002` does not exist

I haven't dug into the code at all, but I noticed that all of the OpenAI beta URLs have been removed. Most of the URLs in the install instructions redirect, but the engines listing URL returns a 404 (https://beta.openai.com/docs/engines/codex-series-private-beta). The new URL is https://platform.openai.com/docs/models/codex, and it still shows the model as being in private beta.

Edit:

When I query the list of OpenAI engines available to my account, code-davinci-002 is not listed. So I reran the install script and selected a different engine available to me (gpt-3.5-turbo). After restarting PowerShell I'm still seeing the same error. I confirmed that the openaiapirc file was correctly updated with the new engine and verified that all instances of PowerShell had been shut down, but I'm still seeing the same error.
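For reference, model availability can be checked programmatically; a minimal sketch assuming the pre-1.0 `openai` Python package that Codex CLI used at the time (the helper names here are illustrative, not part of the CLI):

```python
# Check whether a given model id is available to the account.
# Assumes the pre-1.0 `openai` Python package (openai.Model.list()).
import os

def has_model(model_ids, wanted):
    """Return True if `wanted` is among the available model ids."""
    return wanted in set(model_ids)

def list_model_ids():
    """Fetch the model ids visible to this API key (requires network)."""
    import openai  # pip install "openai<1.0"
    openai.api_key = os.environ["OPENAI_API_KEY"]
    return [m["id"] for m in openai.Model.list()["data"]]

# Live usage: has_model(list_model_ids(), "code-davinci-002")
```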

Edit 2:
I also updated the current_context.config file with the new engine. Now the error I receive is:

Codex CLI error: Invalid request - This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?
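That second error comes from the split between the two API endpoints: chat models (gpt-3.5-turbo, gpt-4) are only served by /v1/chat/completions with a `messages` payload, while the older completion models use /v1/completions with a `prompt`. A minimal sketch of the routing rule (`build_request` is a hypothetical helper, not Codex CLI code):

```python
# Route a request to the correct OpenAI endpoint based on the model family.
CHAT_PREFIXES = ("gpt-3.5-turbo", "gpt-4")

def build_request(model: str, prompt: str) -> dict:
    """Return the endpoint and payload shape appropriate for `model`."""
    if model.startswith(CHAT_PREFIXES):
        return {
            "endpoint": "/v1/chat/completions",
            "payload": {
                "model": model,
                "messages": [{"role": "user", "content": prompt}],
            },
        }
    return {
        "endpoint": "/v1/completions",
        "payload": {"model": model, "prompt": prompt},
    }
```

A CLI built on the completions endpoint hits exactly this error as soon as a chat-only model name is put into its config.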
@kerbymart

It seems code-davinci-002 and code-cushman-001 have been removed, while at the same time the Codex CLI does not support gpt-3.5-turbo.

@hablutzel1

This fork is intended to fix this: https://github.com/Lukas-LLS/Codex-CLI. A PR from @Lukas-LLS would be great.

@cyrrill

cyrrill commented Mar 28, 2023

Note:

As of March 2023, the Codex models are now deprecated. Please check out our newer Chat models which are able to do many coding tasks with similar capability

as per:


https://platform.openai.com/docs/guides/code

Use gpt-3.5-turbo for best results! Once you get API access to GPT-4, move up to that; there's no point in using Codex models like davinci anymore.

@loftusa

loftusa commented Mar 28, 2023

I'm still getting "cannot find openAI model" errors, even with gpt-3.5-turbo as my model.
EDIT: That was while using the fork mentioned above: https://github.com/Lukas-LLS/Codex-CLI

Using the main branch, I get `Codex CLI error: Invalid request - This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?` when I use gpt-3.5-turbo as my model, and gpt-4 straight up doesn't work.

I use this tool every day, so a bugfix would be great!

@Lukas-LLS

I can think of two possible causes for your problem:

  1. You might have an old version of the fork. At the time my fork was posted in this issue I was still working on it, and at that point it was not operational. (If you update to a newer version, make sure to run the cleanup script and then the setup again, because there are some changes that will break an older setup.)
  2. It could also be that you still have the setup from https://github.com/microsoft/Codex-CLI. If that is the case, you should run the cleanup script from the original project before running the setup from the fork. (I don't know how you migrated to the fork, so this is just a possibility.)

As for gpt-4, make sure the model is actually available to you. For that, you must either have signed up on the waitlist: https://openai.com/waitlist/gpt-4-api (the waitlist grants access to the gpt-4 model), or have a ChatGPT Plus subscription: https://chat.openai.com/chat, via the Upgrade to Plus button in the lower left corner (ChatGPT Plus grants access to the gpt-4-32k model).

If your issue still persists after these steps, let me know and I will look further into it.

@loftusa

loftusa commented Mar 31, 2023

@Lukas-LLS still having problems. Both gpt-4-32k and gpt-3.5-turbo return `Cannot find OpenAI model` errors. I have ChatGPT Plus.


The only difference I can think of between what I did and the installation instructions is that I copied my OpenAI secret key from where I originally stored it, since the picture showing how to copy it directly from the OpenAI website seems outdated (I cannot do that anymore).

@hablutzel1

@Lukas-LLS , could you consider submitting a PR to the official project?

@Lukas-LLS

I have already submitted a PR: #131

@Lukas-LLS

@loftusa I have found and fixed your problem. It was limited to the zsh_setup.sh script, which is why I did not find it immediately: in some places the variable name modelId had been written as modelID, due to me carelessly refactoring the name without checking afterwards. The problem should be fixed with the latest commit.

I also found out that zsh_setup.sh only works on macOS, and not for zsh on ordinary Linux, due to the different implementations of the sed command on these two operating systems.
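For context, BSD sed on macOS requires `sed -i ''` (an explicit, possibly empty backup-suffix argument) while GNU sed on Linux takes `sed -i` with no argument, so a script written for one fails on the other. One portable workaround, sketched here in Python (a hypothetical alternative, not the fork's actual fix), is to do the in-place substitution without sed at all:

```python
# Portable in-place substitution, sidestepping the BSD vs. GNU `sed -i`
# incompatibility between macOS and Linux.
import re
from pathlib import Path

def replace_in_file(path, pattern, repl):
    """Apply a regex substitution to a text file in place."""
    p = Path(path)
    p.write_text(re.sub(pattern, repl, p.read_text()))

# Example: replace_in_file("openaiapirc", r"engine=.*", "engine=gpt-3.5-turbo")
```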

@AntonOsika

Support for Chat models (GPT 3.5/4) now works on my fork!

Feel free to use it here:

https://github.com/AntonOsika/CLI-Co-Pilot

The required changes in the code were small but non-obvious.

@hablutzel1

@AntonOsika , for Bash, your fork is inserting a space at the beginning of the generated commands, preventing them from being stored in the history. Can I change this behavior via configuration?

@AntonOsika

AntonOsika commented Apr 9, 2023 via email

@Fatfish588

> @loftusa I have found and fixed your problem. It was limited to the zsh_setup.sh script […]

I found that after pressing Ctrl+G, the generated command is inserted directly after the prompt I typed, on the same line, like this:

`what is running on port 3306sudo lsof -i :3306`


@Lukas-LLS

That was the original behavior of Codex-CLI. I found that writing `# Your comment here` && worked around this behavior, although I did not like that way of writing a prompt. I have now changed the way the command is inserted in bash and zsh to match PowerShell: you write a normal comment, hit Ctrl+G, and the command appears on a new line below the comment. For this to take effect you must update your local repository.

@Fatfish588

> That was the original behavior of Codex-CLI. […]

Now it runs great. Thank you. 🙏

@pripishchik

> Support for Chat models (GPT 3.5/4) now works on my fork!
>
> https://github.com/AntonOsika/CLI-Co-Pilot

getting this error in zsh:

`Codex CLI error: Unexpected exception - module 'openai' has no attribute 'ChatCompletion'`

do you know how to fix it?

@hablutzel1

> do you know how to fix it?

Try updating the openai package with pip.
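For background: that AttributeError usually means the installed `openai` package predates the chat API (ChatCompletion was added around version 0.27 of the pre-1.0 package, if the release history is remembered correctly), so `pip install --upgrade openai` resolves it. A small illustrative sketch of the version check (helper names are hypothetical):

```python
# Sketch: decide whether an installed `openai` package is new enough for
# ChatCompletion (added around version 0.27 of the pre-1.0 package).
def version_tuple(v: str) -> tuple:
    """'0.26.5' -> (0, 26, 5) for ordered comparison."""
    return tuple(int(x) for x in v.split("."))

def needs_upgrade(installed: str, minimum: str = "0.27.0") -> bool:
    """True if `installed` is older than `minimum`."""
    return version_tuple(installed) < version_tuple(minimum)

# Usage (assumes the pre-1.0 package exposes its version string):
#   import openai
#   if needs_upgrade(openai.version.VERSION):
#       print("run: pip install --upgrade openai")
```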

@pripishchik

@hablutzel1 looks like it works, thanks!!!
