All notable changes to the "local-ai-code-completion" extension will be documented in this file.
Check [Keep a Changelog](https://keepachangelog.com/) for recommendations on how to structure this file.
- Added a config option for the generation timeout
- Added config options for the `baseUrl` of the Ollama API (enables using the extension with a remote or local Ollama server)
- Improved logging
- Fixed a bug where aborting generation would not work
Thanks to @johnnyasantoss for making these changes.
- Added options for changing the model, temperature, and top_p parameters. Thanks to @Entaigner for this contribution.
- Changed the model from codellama:7b-code to codellama:7b-code-q4_K_S, which noticeably increases generation speed.
- Fixed the Ollama server seemingly not starting when triggering generation for the first time.
- Added additional usage instructions to the README.
- Fixed the Escape key being locked to aborting generation, which prevented other Escape key functions, such as closing IntelliSense, from working.
- Fixed the Cancel button in the progress notification not working.
- Initial release