
❓ [Question]: Is there any way to use OpenAI for free? #112

Open
enkii07 opened this issue Oct 24, 2023 · 9 comments
Labels
question Further information is requested

Comments

@enkii07

enkii07 commented Oct 24, 2023

Your Question

So I tried a reverse proxy, but it's not working; it shows me this error after 33%:


ERROR: Exception: GPT3 error: Too many requests, you can send 3 requests per 15 seconds, please wait and try again later.

I know GPT-4 is not that expensive, but I'm a student and I just want it for free!! XD

Is there any way, or should I just buy it?

@enkii07 enkii07 added the question Further information is requested label Oct 24, 2023
@davidmartinrius

You could use your own LLM, such as Llama 2, but you need a GPU with at least 24 GB of VRAM to work comfortably.

@Mr9inety6iX

You could use your own LLM, such as Llama 2, but you need a GPU with at least 24 GB of VRAM to work comfortably.

HOW brother?

@davidmartinrius

davidmartinrius commented Nov 5, 2023

You can use https://github.com/oobabooga/text-generation-webui with a model like Llama 2, Vicuna, or whichever model you prefer (the text-generation-webui project suggests some models).
Once you have installed it and the web UI is running, you can use the API from your own server instead of OpenAI. You would need to make some modifications in ShortGPT, but it should be quite simple.
When using this web UI you can fall back on your CPU in case you do not have a GPU. Or, if you have a GPU with less than 24 GB, you can also use smaller models like Llama-2-13B from https://huggingface.co/TheBloke/Llama-2-13B-GGML
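To make this concrete: recent versions of text-generation-webui can expose an OpenAI-compatible API when started with its API flag. A minimal sketch of calling such a local endpoint from Python, assuming the default port 5000 and a hypothetical model name (adjust both to your setup):

```python
# Hedged sketch: query a local OpenAI-compatible server (e.g. text-generation-webui)
# instead of api.openai.com. The base URL and model name are assumptions.
import json
import urllib.request

def build_local_request(prompt, base_url="http://127.0.0.1:5000/v1",
                        model="llama-2-13b", max_tokens=256):
    """Build an HTTP POST request for a local OpenAI-compatible chat endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def local_completion(prompt, **kwargs):
    """Send the request and return the generated text (requires a running server)."""
    with urllib.request.urlopen(build_local_request(prompt, **kwargs)) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Since the request format mirrors OpenAI's chat completions API, swapping the base URL is usually the only change needed once the local server is up.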

If you have more questions I will be glad to respond.

David Martin Rius

@artias13

https://github.com/xtekky/gpt4free

@daniyalkhalid7

daniyalkhalid7 commented May 6, 2024

You can use https://github.com/oobabooga/text-generation-webui with a model like Llama 2, Vicuna, or whichever model you prefer (the text-generation-webui project suggests some models). Once you have installed it and the web UI is running, you can use the API from your own server instead of OpenAI. You would need to make some modifications in ShortGPT, but it should be quite simple. When using this web UI you can fall back on your CPU in case you do not have a GPU. Or, if you have a GPU with less than 24 GB, you can also use smaller models like Llama-2-13B from https://huggingface.co/TheBloke/Llama-2-13B-GGML

If you have more questions I will be glad to respond.

David Martin Rius

Can you help me make changes in ShortGPT?
I can run GPT-4 and Ollama both locally,
but I don't know how to integrate them into ShortGPT.

@davidmartinrius

You can use https://github.com/oobabooga/text-generation-webui with a model like Llama 2, Vicuna, or whichever model you prefer (the text-generation-webui project suggests some models). Once you have installed it and the web UI is running, you can use the API from your own server instead of OpenAI. You would need to make some modifications in ShortGPT, but it should be quite simple. When using this web UI you can fall back on your CPU in case you do not have a GPU. Or, if you have a GPU with less than 24 GB, you can also use smaller models like Llama-2-13B from https://huggingface.co/TheBloke/Llama-2-13B-GGML
If you have more questions I will be glad to respond.
David Martin Rius

Can you help me make changes in ShortGPT? I can run GPT-4 and Ollama both locally, but I don't know how to integrate them into ShortGPT.

You can create your own function like this one:

def gpt3Turbo_completion(chat_prompt="", system="You are an AI that can give the answer to anything", temp=0.7, model="gpt-3.5-turbo", max_tokens=1000, remove_nl=True, conversation=None):
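A function with that signature could then be pointed at a local OpenAI-compatible endpoint instead of api.openai.com. A hedged sketch, assuming Ollama's OpenAI-compatible endpoint on localhost:11434 and a local "llama2" model (both are assumptions; match them to whatever you actually run):

```python
# Hypothetical drop-in replacement for ShortGPT's gpt3Turbo_completion that
# sends the request to a local server. Endpoint and model name are assumptions.
import json
import urllib.request

LOCAL_API = "http://localhost:11434/v1/chat/completions"  # assumed Ollama endpoint

def gpt3Turbo_completion(chat_prompt="", system="You are an AI that can give the answer to anything",
                         temp=0.7, model="llama2", max_tokens=1000,
                         remove_nl=True, conversation=None):
    # Same parameters as ShortGPT's helper, but the request goes to LOCAL_API.
    messages = conversation or [
        {"role": "system", "content": system},
        {"role": "user", "content": chat_prompt},
    ]
    body = json.dumps({
        "model": model,
        "messages": messages,
        "temperature": temp,
        "max_tokens": max_tokens,
    }).encode("utf-8")
    req = urllib.request.Request(LOCAL_API, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        text = json.loads(resp.read())["choices"][0]["message"]["content"]
    return text.replace("\n", " ").strip() if remove_nl else text
```

Keeping the signature identical means the rest of ShortGPT's calling code should not need to change; only the transport is swapped.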

@9volt24

9volt24 commented May 19, 2024

https://github.com/xtekky/gpt4free

I tried this and got everything set up, but how am I supposed to find my freegpt4 API key?

@davidmartinrius

https://github.com/xtekky/gpt4free

I tried this and got everything set up, but how am I supposed to find my freegpt4 API key?

I guess you missed the "free" part haha

Obviously, you do not need any API key if you integrated gpt4free...

@Skomars

Skomars commented Dec 25, 2024

Trying to get this working with my locally installed Llama version,
but I don't understand, as mentioned a bit earlier, exactly where to modify the code.

Anyone using a locally installed LLM running on their own server who can point me in the right direction?

EDIT: I guess I'm just a bit confused. I'm not sure where to put the URL for my locally running server, since I can't find any place in the current gpt_utils.py file where I'm supposed to replace it.


7 participants