# v0.17.0
## Breaking Changes

- always use streaming unless `--no-stream` is set explicitly (#415)
- vertexai config changed: replace `api_base` with `project_id`/`location` (see the example below)
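An updated vertexai client entry might look like this (the project and location values are illustrative placeholders):

```yaml
clients:
  - type: vertexai
    project_id: my-gcp-project   # replaces the old api_base field
    location: us-central1
```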
## Self-Hosted Server
AIChat comes with a built-in lightweight web server:

- Provides access to all LLMs through an OpenAI-compatible API
- Hosts LLM playground/arena web applications
```sh
$ aichat --serve
Chat Completions API: http://127.0.0.1:8000/v1/chat/completions
LLM Playground: http://127.0.0.1:8000/playground
LLM ARENA: http://127.0.0.1:8000/arena
```
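Any configured model can then be queried with a standard OpenAI-style request. A minimal sketch, assuming an OpenAI client is configured (the `model` value is illustrative):

```sh
curl http://127.0.0.1:8000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "openai:gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "hello"}]
      }'
```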
## New Clients
bedrock, vertex-claude, cloudflare, groq, perplexity, replicate, deepseek, zhipuai, anyscale, deepinfra, fireworks, openrouter, octoai, together
## New REPL Commands
```
.prompt                  Create a temporary role using a prompt
.set max_output_tokens   Customize the model's max output tokens
```
```
> .prompt you are a js console
%%> Date.now()
1658333431437

> .set max_output_tokens 4096
```
## New CLI Options
```
--serve [<ADDRESS>]  Serve the LLM API and WebAPP
--prompt <PROMPT>    Use the system prompt
```
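For instance, a one-off system prompt can be combined with a regular input (the prompt and input text are illustrative):

```sh
$ aichat --prompt 'You are a shell expert' 'find files modified in the last hour'
```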
## New Configuration Fields
```yaml
# Set default top-p parameter
top_p: null
# Command that will be used to edit the current line buffer with ctrl+o
# if unset, fall back to $EDITOR and $VISUAL
buffer_editor: null
```
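A sketch with illustrative non-default values:

```yaml
top_p: 0.9           # nucleus sampling: restrict to tokens covering the top 90% of probability mass
buffer_editor: vim   # ctrl+o opens the current line buffer in vim
```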
## New Features
- add completion scripts (#411)
- shell commands support revision
- add `.prompt` repl command (#420)
- customize model's max_output_tokens (#428)
- builtin models can be overwritten by models config (#429)
- serve all LLMs as an OpenAI-compatible API (#431)
- support customizing the `top_p` parameter (#434)
- run without a config file by setting `AICHAT_CLIENT` (#452); see the sketch after this list
- add `--prompt` option (#454)
- non-streaming responses return token usage (#458)
- `.model` repl completions show max tokens and price (#462)
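A minimal sketch of running without a config file (#452), assuming the selected client reads its API key from the environment; the key value is a placeholder:

```sh
$ export OPENAI_API_KEY=sk-...          # placeholder key
$ AICHAT_CLIENT=openai aichat 'hello'   # select the client without a config file
```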