To set up the full Reginald app (which consists of both the response engine and the Slack bot), you can use the `reginald run_all` command on the terminal. To see the CLI arguments, you can simply run:
reginald run_all --help
Note: specifying CLI arguments will override any environment variables set.
Below are the key environment variables that must be set.

You must set the Slack bot environment variables (see the main README for information on obtaining them from Slack):

`SLACK_APP_TOKEN`
: app token for Slack

`SLACK_BOT_TOKEN`
: bot token for Slack
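For example, you could export the two tokens in your shell before starting the app (the values below are placeholders; use the tokens you obtained from Slack):

```bash
# Placeholder values -- substitute the tokens obtained from Slack
export SLACK_APP_TOKEN="xapp-..."
export SLACK_BOT_TOKEN="xoxb-..."
```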
If you're using a model which uses the OpenAI API, you must set the `OPENAI_API_KEY` environment variable:

`OPENAI_API_KEY`
: API key for OpenAI if using `chat-completion-openai` or `llama-index-gpt-openai` models
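For example (placeholder value; substitute your own key):

```bash
# Placeholder -- substitute your OpenAI API key
export OPENAI_API_KEY="sk-..."
```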
If you're using a model which uses Azure's OpenAI instance, you must set the following environment variables:

`OPENAI_AZURE_API_BASE`
: API base for Azure OpenAI if using `chat-completion-azure` or `llama-index-gpt-azure` models

`OPENAI_AZURE_API_KEY`
: API key for Azure OpenAI if using `chat-completion-azure` or `llama-index-gpt-azure` models
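For example (both values are placeholders taken from your Azure OpenAI resource):

```bash
# Placeholders -- substitute the endpoint and key from your Azure OpenAI resource
export OPENAI_AZURE_API_BASE="https://<your-resource-name>.openai.azure.com/"
export OPENAI_AZURE_API_KEY="<your-azure-openai-api-key>"
```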
For creating a data index, you must set the GitHub token environment variable `GITHUB_TOKEN` (see the main README for information on obtaining it from GitHub):

`GITHUB_TOKEN`
: GitHub access token
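For example (placeholder value):

```bash
# Placeholder -- substitute the GitHub access token described in the main README
export GITHUB_TOKEN="<your-github-access-token>"
```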
Lastly, to avoid passing CLI arguments and be able to simply use `reginald run_all`, you can also set the following variables (see the example after this list):
`REGINALD_MODEL`
: name of the model to use (see the models README for the list of models available)

`REGINALD_MODEL_NAME`
: name of the sub-model to use with the one requested if not using the `hello` model.
    - For `llama-index-llama-cpp` and `llama-index-hf` models, this specifies the LLM (or path to that model) which we would like to use
    - For `chat-completion-azure` and `llama-index-gpt-azure`, this refers to the deployment name on Azure
    - For `chat-completion-openai` and `llama-index-gpt-openai`, this refers to the model/engine name on OpenAI

`LLAMA_INDEX_MODE`
: mode to use ("query" or "chat") if using a `llama-index` model

`LLAMA_INDEX_DATA_DIR`
: data directory if using a `llama-index` model

`LLAMA_INDEX_WHICH_INDEX`
: index to use ("handbook", "wikis", "public", "reg" or "all_data") if using a `llama-index` model

`LLAMA_INDEX_FORCE_NEW_INDEX`
: whether to force a new index if using a `llama-index` model

`LLAMA_INDEX_MAX_INPUT_SIZE`
: max input size if using a `llama-index-llama-cpp` or `llama-index-hf` model

`LLAMA_INDEX_IS_PATH`
: whether to treat `REGINALD_MODEL_NAME` as a path if using a `llama-index-llama-cpp` model

`LLAMA_INDEX_N_GPU_LAYERS`
: number of GPU layers if using a `llama-index-llama-cpp` model

`LLAMA_INDEX_DEVICE`
: device to use if using a `llama-index-hf` model
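As a sketch of how these fit together, the following sets up a `llama-index-llama-cpp` model via environment variables. All values are illustrative placeholders, and the exact accepted formats (e.g. for the boolean flags) should be checked against `reginald run_all --help`:

```bash
# Illustrative placeholders -- adjust to your own setup
export REGINALD_MODEL="llama-index-llama-cpp"
export REGINALD_MODEL_NAME="/path/to/your/model.gguf"  # treated as a path because LLAMA_INDEX_IS_PATH is set
export LLAMA_INDEX_IS_PATH="True"                      # boolean format is an assumption; check the CLI help
export LLAMA_INDEX_MODE="chat"                         # "query" or "chat"
export LLAMA_INDEX_WHICH_INDEX="handbook"              # "handbook", "wikis", "public", "reg" or "all_data"
export LLAMA_INDEX_DATA_DIR="data"                     # placeholder data directory
export LLAMA_INDEX_N_GPU_LAYERS="0"                    # number of GPU layers for llama-cpp
```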
Rather than passing in the environment variables on the command line, you can use an environment file, e.g. `.env`, and set the variables using:
source .env
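For example, a minimal `.env` for a Slack-connected OpenAI model might look like the sketch below (all values are placeholders; add whichever of the variables described above apply to your setup):

```bash
# .env -- placeholder values only
export SLACK_APP_TOKEN="xapp-..."
export SLACK_BOT_TOKEN="xoxb-..."
export OPENAI_API_KEY="sk-..."
export REGINALD_MODEL="chat-completion-openai"
export REGINALD_MODEL_NAME="gpt-3.5-turbo"  # illustrative model/engine name on OpenAI
```

After sourcing the file, `reginald run_all` should pick up the exported variables.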
To set up the Reginald response engine (without the Slack bot), you can use the `reginald run_all_api_llm` command on the terminal. To see the CLI arguments, you can simply run:
reginald run_all_api_llm --help
The CLI arguments are largely the same as for `reginald run_all`, except that the Slack bot tokens are not required (those are only needed when setting up the Slack bot, which will call the response engine via an API that is set up using `reginald run_all_api_llm`). You can also use the same environment variables as `reginald run_all`, except for the Slack bot tokens.
You can still use the same `.env` file that you used for `reginald run_all` to set up the environment variables, or choose to have a separate `.response_engine_env` file to store the environment variables required for the response engine setup.
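For example, assuming you have created a `.response_engine_env` file along the lines of the `.env` sketch above (minus the Slack bot tokens), the response engine can be started with:

```bash
source .response_engine_env
reginald run_all_api_llm
```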
To set up the Reginald Slack bot (without the response engine), you can use the `reginald run_all_api_bot` command on the terminal. To see the CLI arguments, you can simply run:
reginald run_all_api_bot --help
This command takes in an emoji to respond with and will set up a Slack bot that responds with the specified emoji (the 🚀 emoji by default, if none is specified). You can also set the emoji to respond with using the `REGINALD_EMOJI` environment variable.
You can use the same `.env` file that you used for `reginald run_all` to set up the environment variables, or choose to have a separate `.slack_bot_env` file to store the environment variables required for the Slack bot setup. This must include the Slack bot tokens.
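For example, a `.slack_bot_env` sketch might contain just the Slack bot tokens and, optionally, the emoji (placeholder values; the expected format for `REGINALD_EMOJI` is an assumption here, so check `reginald run_all_api_bot --help`):

```bash
# .slack_bot_env -- placeholder values only
export SLACK_APP_TOKEN="xapp-..."
export SLACK_BOT_TOKEN="xoxb-..."
export REGINALD_EMOJI="rocket"  # assumed to be a Slack emoji name; check the CLI help
```

Then:

```bash
source .slack_bot_env
reginald run_all_api_bot
```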