Query LLM

Query LLM is a simple, zero-dependency CLI tool for asking an LLM questions. It works seamlessly with both cloud-based managed LLM services (e.g. OpenAI GPT, Groq, OpenRouter) and locally hosted LLM servers (e.g. llama.cpp, LocalAI, Ollama). Internally, it guides the LLM to perform step-by-step reasoning following the Chain of Thought approach.

You’ll need either Node.js (>= v18) or Bun to run Query LLM. Run the script directly to start an interactive session:

./query-llm.js
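
If you prefer Bun, invoking the script through it explicitly should behave the same way (a sketch; not separately documented here):

bun ./query-llm.js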

For quick answers, pipe your question directly:

echo "Indonesia travel destinations?" | ./query-llm.js

To perform specific tasks:

echo "Translate into German: thank you" | ./query-llm.js

Using Local LLM Servers

Supported local LLM servers include llama.cpp, Jan, Ollama, and LocalAI.

To use llama.cpp locally with its inference engine, load a quantized model such as Gemma 2B, Phi-3 Mini, or Llama-3 8B, and point the environment variable LLM_API_BASE_URL at the server:

llama-server --hf-repo LiteLLMs/gemma-2b-it-GGUF --hf-file Q4_0/Q4_0-00001-of-00001.gguf
export LLM_API_BASE_URL=http://127.0.0.1:8080/v1
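
With the server running and the variable exported, a quick question (borrowing the pipe syntax shown earlier) confirms everything is wired up:

echo "Which planet is the largest?" | ./query-llm.js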

To use Jan with its local API server, load a model such as Phi-3 Mini or Llama-3 8B (refer to its documentation for details), and set the environment variables:

export LLM_API_BASE_URL=http://127.0.0.1:1337/v1
export LLM_CHAT_MODEL='llama3-8b-instruct'

To use Ollama locally, pull a model and set the environment variables:

ollama pull phi3
export LLM_API_BASE_URL=http://127.0.0.1:11434/v1
export LLM_CHAT_MODEL='phi3'

For LocalAI, start its container and set the environment variable LLM_API_BASE_URL:

docker run -ti -p 8080:8080 localai/localai tinyllama-chat
export LLM_API_BASE_URL=http://localhost:8080/v1

Using Managed LLM Services

To use an OpenAI GPT model, set the environment variable OPENAI_API_KEY to your API key:

export OPENAI_API_KEY="sk-yourownapikey"
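
To pin a specific model rather than the tool's default, the LLM_CHAT_MODEL variable used by the other services may apply here as well; treat this as an assumption rather than documented behavior:

# Assumption: LLM_CHAT_MODEL is also honored for OpenAI; model name is illustrative
export LLM_CHAT_MODEL='gpt-4o-mini'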

To use OpenRouter, select a model (e.g. Mistral 7B, Llama-3 8B, OpenChat 3.6) and set the environment variables accordingly:

export LLM_API_BASE_URL=https://openrouter.ai/api/v1
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="meta-llama/llama-3-8b-instruct"

Query LLM is also compatible with Anyscale, Deep Infra, Fireworks, Groq, Lepton, Novita, Octo, and Together. For details on how to configure the environment variables for each of these services, refer to the documentation of the sister project, Ask LLM.
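
As an illustration for one of those services, Groq's OpenAI-compatible endpoint can be configured the same way as OpenRouter above (the model name is illustrative; consult Groq's catalog and the Ask LLM documentation for current values):

export LLM_API_BASE_URL=https://api.groq.com/openai/v1
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="llama3-8b-8192"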

Evaluating Questions

Given a text file containing pairs of User and Assistant messages, Query LLM can evaluate it:

User: Which planet is the largest?
Assistant: The largest planet is /Jupiter/.

User: and the smallest?
Assistant: The smallest planet is /Mercury/.

Assuming the above content is saved as qa.txt, the following command initiates a multi-turn conversation with the LLM, asking each question in turn and checking the answer against the regular expression between the slashes:

./query-llm.js qa.txt

For additional examples, please refer to the tests/ subdirectory.

Two environment variables can be used to modify the behavior (see the example after the list):

  • LLM_DEBUG_FAIL_EXIT: When set, Query LLM will exit immediately upon encountering an incorrect answer, and subsequent questions in the file will not be processed.

  • LLM_DEBUG_PIPELINE: When set, and if the expected regular expression does not match the answer, the internal LLM pipeline will be printed to stdout.
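
Both variables take effect simply by being set. For example, to stop at the first incorrect answer while evaluating qa.txt:

LLM_DEBUG_FAIL_EXIT=1 ./query-llm.js qa.txt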