Simple ChatGPT Backend is a server-side Node.js application that uses Express.js. It serves as a bridge between client-side applications and the OpenAI API, securely handling sensitive API keys.
This is useful for anyone who wants to use the OpenAI API without the risk of exposing their secret API key.
Please note: although the API key itself stays secure, the endpoint could still be misused. To harden it, consider implementing usage limits (rate limiting), IP-based access controls, or user authentication. These measures help prevent unauthorized or excessive use of the service.
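One of those measures, usage limitation, can be sketched as a small Express-style middleware. This is a minimal illustration and not code from this project; the window size and request cap are arbitrary values chosen for the example:

```js
// Minimal in-memory rate limiter sketch (no external packages).
// The window and cap below are illustrative values.
const WINDOW_MS = 15 * 60 * 1000; // 15-minute window
const MAX_REQUESTS = 30;          // per-IP cap within one window

const hits = new Map(); // ip -> { count, windowStart }

function rateLimiter(req, res, next) {
  const now = Date.now();
  const entry = hits.get(req.ip);

  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    // First request from this IP, or the previous window has expired.
    hits.set(req.ip, { count: 1, windowStart: now });
    return next();
  }

  entry.count += 1;
  if (entry.count > MAX_REQUESTS) {
    return res.status(429).json({ error: 'Too many requests' });
  }
  next();
}

// Usage with Express: app.use('/api/openai', rateLimiter);
```

For production use, a maintained package such as `express-rate-limit` would be a more robust choice than this hand-rolled version.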
- Screenshots
- Prerequisites
- How to Run
- Environment Variables
- How to get OpenAI API key value
- About Developer
All the parameters that the OpenAI API accepts are available, except for the `stream` parameter.
You can send a simple or a more complex request, since all the parameters are optional (except, of course, the `messages` parameter).
```js
const response = await fetch('http://127.0.0.1:3001/api/openai', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    message: 'who are u?'
  })
});
const responseData = await response.json();
console.log(responseData);
```
Or using `curl`:

```bash
curl -X POST -H "Content-Type: application/json" -d "{\"message\": \"Who are you?\"}" http://localhost:3001/api/openai
```
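Since the optional parameters are also accepted, a more complex request might look like the sketch below. The extra fields (`model`, `temperature`, `max_tokens`) mirror the OpenAI chat completions parameters; that this backend forwards them under exactly these names is an assumption, so check the project's code before relying on it:

```js
// Sketch: a request that sets optional parameters alongside the message.
// Field names mirror the OpenAI API; forwarding them is an assumption.
async function askOpenAI(payload) {
  const response = await fetch('http://127.0.0.1:3001/api/openai', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload)
  });
  return response.json();
}

const payload = {
  message: 'Summarize the plot of Hamlet in two sentences.',
  model: 'gpt-3.5-turbo',   // illustrative model name
  temperature: 0.25,        // more focused, deterministic output
  max_tokens: 150           // cap the response length
};

// With the server running: askOpenAI(payload).then(console.log);
```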
In order to run this application, you will need:
- Node.js 20.6 or higher
- An OpenAI API key (How to get it?)
- Rename `.env.template` to `.env`.
- Inside `.env`, set the values for the environment variables (How to do that?).
- Install the required Node.js packages using your preferred package manager (I suggest pnpm):

```bash
npm install
```

- Run the server:

```bash
npm run dev
```
The server will start and listen for requests on the port specified in your environment variables (default: 3001).
- `PORT`: Port where the server will run. (Optional, default is 3001)
- `OPENAI_API_KEY`: Your OpenAI API key. (Required)
- `OPENAI_API_DEFAULT_MODEL`: Default model to use when making API requests. (Optional, check default value in `.env.template`)
- `OPENAI_API_DEFAULT_MAX_TOKENS`: Maximum number of tokens that can be generated in a single API response. Tokens are chunks of text, and the total number of tokens affects the cost and duration of an API call. (Optional, check default value in `.env.template`)
- `OPENAI_API_DEFAULT_TEMPERATURE`: Controls the randomness of the generated text. A higher value, such as 1.0, produces more random and creative responses, while a lower value, like 0.25, generates more focused and deterministic responses. (Optional, check default value in `.env.template`)
- `OPENAI_API_DEFAULT_TOP_P`: Controls the diversity of the generated text using top-p (nucleus) sampling, where the model only considers the most probable tokens whose cumulative probability adds up to p. A higher value, like 1.0, allows more options and can lead to more varied responses. (Optional, check default value in `.env.template`)
- `OPENAI_API_DEFAULT_FREQUENCY_PENALTY`: Adjusts the frequency penalty during text generation. A higher penalty discourages the model from repeating the same phrases or tokens. (Optional, check default value in `.env.template`)
- `OPENAI_API_DEFAULT_PRESENCE_PENALTY`: Controls the presence penalty during text generation. It discourages the model from focusing on specific phrases or words by reducing their likelihood. (Optional, check default value in `.env.template`)
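Put together, a filled-in `.env` might look like the sketch below. Every value here is a placeholder chosen for illustration; the project's actual defaults live in `.env.template`:

```shell
# Example .env -- values are illustrative, not the shipped defaults.
PORT=3001
OPENAI_API_KEY=your-secret-api-key-here
OPENAI_API_DEFAULT_MODEL=gpt-3.5-turbo
OPENAI_API_DEFAULT_MAX_TOKENS=150
OPENAI_API_DEFAULT_TEMPERATURE=0.7
OPENAI_API_DEFAULT_TOP_P=1.0
OPENAI_API_DEFAULT_FREQUENCY_PENALTY=0
OPENAI_API_DEFAULT_PRESENCE_PENALTY=0.5
```

Remember that `.env` holds your secret key, so it should never be committed to version control.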
Note: Instructions on how to get the OpenAI API key value can be found in the How to get OpenAI API key value section.
- Go to the OpenAI platform website https://platform.openai.com.
- Create your account (you will need an email address and a phone number to verify it)
- Once you have created and verified your account, click on the option 'View API keys'
- Click on the 'Create new secret key' button
- Copy the API key; this is the value for `OPENAI_API_KEY`
Visit my website: Carlos Ochoa
Note: If you encounter any issues with the server, please report them here. Contributions are welcome!