Releases: sigoden/aichat
v0.16.0
New Models
- openai:gpt-4-turbo
- gemini:gemini-1.0-pro-latest (replaces gemini:gemini-pro)
- gemini:gemini-1.0-pro-vision-latest (replaces gemini:gemini-pro-vision)
- gemini:gemini-1.5-pro-latest
- vertexai:gemini-1.5-pro-preview-0409
- cohere:command-r
- cohere:command-r-plus
New Config
ctrlc_exit: false # Whether to exit REPL when Ctrl+C is pressed
New Features
- use ctrl+enter to insert a newline in the REPL (#394)
- support cohere (#397)
- -f/--file takes one value and does not enter the REPL (#399); see the usage sketch below
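A usage sketch of the reworked -f/--file option; the file name and prompt text are placeholders. aichat answers once and exits instead of entering the REPL:
aichat -f data.txt "summarize this file"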
Full Changelog: v0.15.0...v0.16.0
v0.15.0
Breaking Changes
Rename client localai to openai-compatible (#373)
clients:
-  - type: localai
+  - type: openai-compatible
+    name: localai
Gemini/VertexAI clients add a block_threshold configuration option (#375)
block_threshold: BLOCK_ONLY_HIGH # Optional field
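For orientation, a minimal sketch of where the field sits in a client entry; the client type shown and the api_key value are illustrative placeholders:
clients:
  - type: gemini
    api_key: xxx
    block_threshold: BLOCK_ONLY_HIGH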
New Models
- claude:claude-3-haiku-20240307
- ernie:ernie-4.0-8k
- ernie:ernie-3.5-8k
- ernie:ernie-3.5-4k
- ernie:ernie-speed-8k
- ernie:ernie-speed-128k
- ernie:ernie-lite-8k
- ernie:ernie-tiny-8k
- moonshot:moonshot-v1-8k
- moonshot:moonshot-v1-32k
- moonshot:moonshot-v1-128k
New Config
save_session: null # Whether to save the session; if null, ask when exiting
CLI Changes
New REPL Commands
.save session [name]
.set save_session <null|true|false>
.role <name> <text...> # Works in session
New CLI Options
--save-session Whether to save the session
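A brief usage sketch; the session name is a placeholder. The same setting can also be changed inside the REPL with the new command:
aichat -s mysession --save-session
〉.set save_session true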
Bug Fixes
- erratic behaviour when using a temp role in a session (#347)
- colors on non-truecolor terminals (#363)
- session not marked as dirty when updating properties (#379)
- incorrect rendering of text containing tabs (#384)
Full Changelog: v0.14.0...v0.15.0
v0.14.0
Breaking Changes
Compress session automatically (#333)
When the total number of tokens in the session messages exceeds compress_threshold, aichat automatically compresses the session, so you can keep chatting in the session indefinitely.
The default compress_threshold is 2000; set it to zero to disable automatic compression.
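A config sketch of the setting described above, using the stated default:
compress_threshold: 2000 # compress the session once its messages exceed this many tokens; 0 disables compression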
Rename max_tokens to max_input_tokens (#339)
The rename avoids misunderstanding; max_input_tokens is also referred to as context_window.
models:
  - name: mistral
-   max_tokens: 8192
+   max_input_tokens: 8192
New Models
- claude
  - claude:claude-3-opus-20240229
  - claude:claude-3-sonnet-20240229
  - claude:claude-2.1
  - claude:claude-2.0
  - claude:claude-instant-1.2
- mistral
  - mistral:mistral-small-latest
  - mistral:mistral-medium-latest
  - mistral:mistral-large-latest
  - mistral:open-mistral-7b
  - mistral:open-mixtral-8x7b
- ernie
  - ernie:ernie-3.5-4k-0205
  - ernie:ernie-3.5-8k-0205
  - ernie:ernie-speed
Command Changes
-c/--code generates code only (#327)
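A usage sketch; the prompt text is a placeholder:
aichat -c "convert a yaml file to json in python"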
Chat-REPL Changes
.clear messages to clear session messages (#332)
Miscellaneous
Full Changelog: v0.13.0...v0.14.0
v0.13.0
What's Changed
- fix: copy on linux wayland by @sigoden in #288
- fix: deprecation warning of .read command by @Nicoretti in #296
- feat: supports model capabilities by @sigoden in #297
- feat: add openai.api_base config by @sigoden in #302
- feat: add extra_fields to models of localai/ollama clients by @kelvie in #298
- fix: do not attempt to deserialize zero byte chunks in ollama stream by @JosephGoulden in #303
- feat: update openai/qianwen/gemini models by @sigoden in #306
- feat: support vertexai by @sigoden in #308
- refactor: update vertexai/gemini/ernie clients by @sigoden in #309
- feat: edit current prompt on $VISUAL/$EDITOR by @sigoden in #314
- refactor: change header of messages saved to markdown by @sigoden in #317
- feat: support -e/--execute to execute shell commands by @sigoden in #318 (see the usage sketch after this list)
- refactor: improve prompt error handling by @sigoden in #319
- refactor: improve saving messages by @sigoden in #322
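A usage sketch of the -e/--execute flag mentioned above; the natural-language request is a placeholder:
aichat -e "list the five largest files in the current directory"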
New Contributors
- @Nicoretti made their first contribution in #296
- @kelvie made their first contribution in #298
- @JosephGoulden made their first contribution in #303
Full Changelog: v0.12.0...v0.13.0
v0.12.0
What's Changed
- feat: change REPL indicators #263
- fix: pipe failed on macos #264
- fix: cannot read image with uppercase ext #270
- feat: support gemini #273
- feat: abandon PaLM2 #274
- feat: support qianwen:qwen-vl-plus #275
- feat: support ollama #276
- feat: qianwen vision models support embedded images #277
- refactor: remove path existence indicator from info #282
- feat: custom REPL prompt #283
Full Changelog: v0.11.0...v0.12.0
v0.11.0
What's Changed
- refactor: improve render #235
- feat: add a spinner to indicate waiting for response #236
- refactor: qianwen client use incremental_output #240
- fix: the last reply's tokens were not highlighted #243
- refactor: ernie client system message #244
- refactor: palm client system message #245
- refactor: trim trailing spaces from the role prompt #246
- feat: support vision #249
- feat: state-aware completer #251
- feat: add ernie:ernie-bot-8k and qianwen:qwen-max #252
- refactor: sort some completion types #253
- feat: allow shift-tab to select prev in completion menu #254
Full Changelog: v0.10.0...v0.11.0
v0.10.0
New features
Use ::: for multi-line editing, deprecate .edit
〉::: This
is
a
multi-line
message
:::
Temporarily use a role to send a message.
coder〉.role shell how to unzip a file
unzip file.zip
coder〉
As shown above, while in the coder role you temporarily switched to the shell role to send a message; after sending, the current role is still coder.
Set default role/session with config.prelude
For those who want aichat to enter a session after startup, you can set it as follows:
prelude: session:mysession
For those who want aichat to use a role after startup, you can set it as follows:
prelude: role:myrole
Use a model that is not listed by --list-models
If OpenAI releases a new model in the future, it can be used without upgrading Aichat.
$ aichat --model openai:gpt-4-vision-preview
〉.model openai:gpt-4-vision-preview
Changelog
- refactor: improve error message for PaLM client by @sigoden in #213
- refactor: rename Model.llm_name to name by @sigoden in #216
- refactor: use &GlobalConfig to avoid clone by @sigoden in #217
- refactor: remove Model.client_index, match client by name by @sigoden in #218
- feat: allow the use of an unlisted model by @sigoden in #219
- fix: unable to build on android using termux by @sigoden in #222
- feat: add config.prelude to allow setting default role/session by @sigoden in #224
- feat: deprecate .edit, use """ instead by @sigoden in #225
- refactor: improve repl completer by @sigoden in #226
- feat: temporarily use a role to send a message by @sigoden in #227
- refactor: output info contains auto_copy and light_theme by @sigoden in #230
- fix: unexpected additional newline in REPL by @sigoden in #231
- refactor: use ::: as multiline input indicator, deprecate """ by @sigoden in #232
- feat: add openai:gpt-4-1106-preview by @sigoden in #233
Full Changelog: v0.9.0...v0.10.0
v0.9.0
Support multiple LLMs/Platforms
- OpenAI: gpt-3.5/gpt-4
- LocalAI: opensource models
- Azure-OpenAI: user deployed gpt3.5/gpt4
- PaLM: chat-bison-001
- Ernie: eb-instant/ernie-bot/ernie-bot-4
- Qianwen: qwen-turbo/qwen-plus
Enhance session/conversation
New in command mode
--list-sessions List all available sessions
-s, --session [<SESSION>] Create or reuse a session
New in chat mode
.session Start a context-aware chat session
.info session Show session info
.exit session End the current session
Other features:
- Able to start a conversation that incorporates the last question and answer.
- Deprecate config.conversation_first; use aichat -s instead.
- Ask whether to save the session on exit.
Show information
In command mode
aichat --info # Show system info
aichat --role shell --info # Show role info
aichat --session temp --info # Show session info
In chat mode
.info Print system info
.info role Show role info
.info session Show session info
Support textwrap
Configuration:
wrap: no # Specify the text-wrapping mode (no*, auto, <max-width>)
wrap_code: false # Whether to wrap code blocks
Command:
aichat -w 120 # set max width
aichat -w auto # use term width
aichat -w no # no wrap
New Configuration
light_theme: false # If set true, use light theme
wrap: no # Specify the text-wrapping mode (no*, auto, <max-width>)
wrap_code: false # Whether to wrap code blocks
auto_copy: false # Automatically copy the last output to the clipboard
keybindings: emacs # REPL keybindings, possible values: emacs (default), vi
Chat REPL changelog
- Add .copy to copy the last output to the clipboard
- Add .read to read the contents of a file and submit
- Add .edit for multi-line editing (CTRL+S to finish)
- Add .info session to show session info
- Add .info role to show role info
- Rename .conversation to .session
- Rename .clear conversation to .exit session
- Rename .clear role to .exit role
- Deprecate .clear
- Deprecate .prompt
- Deprecate .history and .clear history
Other changes
- Support bracketed paste; you can paste multiple lines of text directly
- Support customizing the theme
- Replace AICHAT_API_KEY with OPENAI_API_KEY; also support OPENAI_API_BASE
- Fix duplicate lines in the kitty terminal
- Deprecate prompt; both --prompt and .prompt are removed
v0.8.0
What's Changed
- feat: support multiple models by @sigoden in #71
- feat: add config.connect_timeout by @sigoden in #76
- feat: add config.organization_id by @sigoden in #77
- feat: add --info by @sigoden in #79
- feat: check token usage in dry_run mode by @sigoden in #82
- feat: add --dry-run by @sigoden in #83
Full Changelog: v0.7.0...v0.8.0
v0.7.0
What's Changed
- feat: provide --prompt for adding a prompt from cli by @sigoden in #62
- feat: support more env vars by @sigoden in #63
- feat: support light theme by @sigoden in #65
- feat: add support for NO_COLOR by @sigoden in #67
- feat: support HTTPS_PROXY and ALL_PROXY by @sigoden in #68
- feat: support role args by @sigoden in #69
Full Changelog: v0.6.0...v0.7.0