A universal CLI for OpenAI, written in Bash.

- Scalable architecture allows continuous support of new APIs.
- Custom API name, version, and all relevant properties.
- Dry-run mode (no API calls actually made) to facilitate debugging of APIs and save costs.
Available APIs:

- `chat/completions` (the default API)
- `models`
- `images/generations`
- `embeddings`
- `moderations`
The default API `chat/completions` provides:

- Complete pipelining to interoperate with other applications
- Prompts read from command-line arguments, a file, or stdin
- Streaming support
- Multiple topics
- Continuous conversations
- Token usage display
`jq` is required.

- Linux: `sudo apt install jq`
- macOS: `brew install jq`
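Why jq? The API responds with JSON, and the script has to pull fields out of it. A minimal sketch of the kind of extraction involved (the sample response and filter here are illustrative, not the script's actual code):

```shell
# Illustrative only: a stand-in chat/completions response.
response='{"choices":[{"message":{"role":"assistant","content":"Hi!"}}]}'
# Pull the assistant's reply out of the JSON with a jq filter.
echo "$response" | jq -r '.choices[0].message.content'
# prints: Hi!
```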
Download the script and mark it executable:

```shell
curl -fsSLOJ https://go.janlay.com/openai
chmod +x openai
```

You may want to move this file to a directory in `$PATH`. Also install the manual page, e.g.:

```shell
pandoc -s -f markdown -t man README.md > /usr/local/man/man1/openai.1
```
Further reading: curl's `-OJ` is a killer feature.
Now you can try it out! To begin, type `openai -h` to access the help manual.

If you run `openai` directly, it may appear to be stuck because it expects prompt content from stdin, which is not yet available. To exit, simply press Ctrl+C to interrupt the process.
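The "stuck" behavior is just ordinary stdin blocking. A minimal sketch of the pattern (illustrative; not the script's actual code):

```shell
# `cat` blocks until stdin reaches EOF; on an interactive terminal with
# nothing piped in, that EOF never arrives, so the command appears stuck.
read_prompt() { cat; }
echo 'Hello from stdin' | read_prompt
# prints: Hello from stdin
```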
Why are you so serious?
What happens when the `openai` command is executed without any parameters?

- The default API used will be `chat/completions`, and the schema version will be `v1`.
- The prompt will be read from stdin.
- The program will wait for input while stdin remains empty.
The best way to understand how to use `openai` is to see various usage cases.

- Debug API data for testing purposes: `openai -n foo bar`
- Say hello to OpenAI: `openai Hello`
- Use another model: `openai +model=gpt-3.5-turbo-0301 Hello`
- Disable streaming and allow more variation in the answer: `openai +stream=false +temperature=1.1 Hello`
- Call another available API: `openai -a models`
- Create a topic named `en2fr` with an initial prompt: `openai @en2fr Translate to French`
- Use an existing topic: `openai @en2fr Hello, world!`
- Read the prompt from the clipboard, then send the result to another topic: `pbpaste | openai | openai @en2fr`
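The last case works because `openai` reads stdin and writes plain text to stdout, so it composes like any other Unix filter. A sketch of the pipeline shape, using a hypothetical `fake_openai` function in place of the real command so it runs offline:

```shell
# fake_openai is a stand-in that just prefixes its input, playing the
# role of an `openai` call in the pipeline.
fake_openai() { sed 's/^/answer: /'; }
echo 'Hello' | fake_openai | fake_openai
# prints: answer: answer: Hello
```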
There are multiple ways to provide a prompt to `openai`:

- Enclose the prompt in single quotes `'` or double quotes `"`: `openai "Please help me translate '你好' into English"`
- Use any argument that does not begin with a minus sign `-`: `openai Hello, world!`
- Place any arguments after `--`: `openai -n -- What is the purpose of the -- argument in Linux commands`
- Input from stdin: `echo 'Hello, world!' | openai`
- Specify a file path with `-f /path/to/file`: `openai -f question.txt`
- Use `-f-` for input from stdin: `cat question.txt | openai -f-`

Choose any one you like :-)
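The `--` convention is the standard end-of-options marker. A minimal sketch of how a Bash script can implement it (a hypothetical parser, not the tool's actual code):

```shell
# Everything before `--` may be an option; everything after is prompt text.
parse() {
  local dry=0 words=()
  while [ $# -gt 0 ]; do
    case "$1" in
      -n) dry=1 ;;                       # dry-run flag
      --) shift; words+=("$@"); break ;; # stop option parsing here
      *) words+=("$1") ;;                # plain prompt word
    esac
    shift
  done
  echo "dry=$dry prompt=${words[*]}"
}
parse -n -- -not-an-option hello
# prints: dry=1 prompt=-not-an-option hello
```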
`$OPENAI_API_KEY` must be available to use this tool. Prepare your OpenAI key in the `~/.profile` file by adding this line:

```shell
export OPENAI_API_KEY=sk-****
```

Or you may want to run with a temporary key for one-time use:

```shell
OPENAI_API_KEY=sk-**** openai hello
```

Environment variables can also be set in `$HOME/.openai/config`.
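A sketch of the usual load-and-check pattern for such a key (demo values only; the script's internals may differ):

```shell
# Demo key: pretend it came from ~/.profile or $HOME/.openai/config.
OPENAI_API_KEY='sk-demo1234'
# The ":?" expansion aborts with a clear error if the key is unset or empty.
: "${OPENAI_API_KEY:?OPENAI_API_KEY must be set}"
# Print only a masked prefix; never log the full key.
echo "Using key: ${OPENAI_API_KEY:0:3}****"
# prints: Using key: sk-****
```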
`openai` offers a dry-run mode that allows you to test command composition without incurring any costs. Give it a try!

```shell
openai -n hello, world!

# This would be the same:
openai -n 'hello, world!'
```
Command and output:

```
$ openai -n hello, world!
Dry-run mode, no API calls made.

Request URL:
--------------
https://api.openai.com/v1/chat/completions

Authorization:
--------------
Bearer sk-cfw****NYre

Payload:
--------------
{
  "model": "gpt-3.5-turbo",
  "temperature": 0.5,
  "max_tokens": 200,
  "stream": true,
  "messages": [
    {
      "role": "user",
      "content": "hello, world!"
    }
  ]
}
```
```shell
echo 'hello, world!' | openai -n
```

For Bash gurus, this would be the same:

```shell
echo 'hello, world!' > hello.txt
openai -n < hello.txt
```

Even this one:

```shell
openai -n <<< 'hello, world!'
```

and this:

```shell
openai -n < <(echo 'hello, world!')
```
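A payload like the one shown in the dry-run output can be assembled safely with `jq`, which JSON-escapes the prompt; this is a general sketch of the approach, not necessarily how the script builds it:

```shell
prompt='hello, world!'
# --arg passes the prompt as a properly escaped JSON string variable.
jq -n --arg content "$prompt" \
  '{model: "gpt-3.5-turbo", temperature: 0.5, max_tokens: 200,
    stream: true, messages: [{role: "user", content: $content}]}'
```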
It seems you have understood the basic usage. Try to get a real answer from OpenAI:

```shell
openai hello, world!
```

Command and output:

```
$ openai hello, world!
Hello there! How can I assist you today?
```
A topic starts with a `@` sign, so `openai @translate Hello, world!` means calling the specified topic `translate`.

To create a new topic, like `translate`, with an initial prompt (a system-role message, internally):

```shell
openai @translate 'Translate, no other words: Chinese -> English, Non-Chinese -> Chinese'
```

Then you can use the topic:

```shell
openai @translate 'Hello, world!'
```

You should get an answer like `你好,世界!`.

Again, to see what happens, use dry-run mode by adding `-n`. You will see the payload that would be sent:
```json
{
  "model": "gpt-3.5-turbo",
  "temperature": 0.5,
  "max_tokens": 200,
  "stream": true,
  "messages": [
    {
      "role": "system",
      "content": "Translate, no other words: Chinese -> English, Non-Chinese -> Chinese"
    },
    {
      "role": "user",
      "content": "Hello, world!"
    }
  ]
}
```
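Building a two-message topic payload follows the same safe-escaping pattern with `jq`, with the topic's initial prompt as the system message (a sketch, not the script's actual code):

```shell
system='Translate, no other words: Chinese -> English, Non-Chinese -> Chinese'
prompt='Hello, world!'
# Two --arg variables keep both strings safely JSON-escaped.
jq -n --arg sys "$system" --arg user "$prompt" \
  '{model: "gpt-3.5-turbo", temperature: 0.5, max_tokens: 200, stream: true,
    messages: [{role: "system", content: $sys}, {role: "user", content: $user}]}'
```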
All use cases above are standalone queries, not conversations. To chat with OpenAI, use `-c`. This can also continue an existing topic conversation by prepending `@topic`.

Please note that chat requests quickly consume tokens, leading to increased costs.
To be continued.
This project uses the MIT license. Please see LICENSE for more information.