# Change Log

All notable changes to the "local-ai-code-completion" extension will be documented in this file.

Check [Keep a Changelog](https://keepachangelog.com/) for recommendations on how to structure this file.

## [Unreleased]

## [1.2.0] - 2024-01-05

### Added

- Config option for the generation timeout
- Config option for the baseUrl of the Ollama API, enabling use of the extension with a remote or local Ollama server (see the example below)
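
Both options are set through the extension's VS Code settings. The snippet below is a minimal sketch only: the setting keys and values are assumptions for illustration, not taken from the extension's documentation, so check the Settings UI or the README for the actual names.

```jsonc
// settings.json — hypothetical keys, shown only to illustrate the options added in 1.2.0
{
  // Abort a completion request that runs longer than this (unit assumed to be milliseconds)
  "localAiCodeCompletion.generationTimeout": 30000,

  // Point the extension at a remote Ollama server, or keep Ollama's default local address
  "localAiCodeCompletion.baseUrl": "http://localhost:11434"
}
```

Ollama listens on port 11434 by default, so the base URL only needs to change for remote or non-standard setups.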

### Changed

- Improved logging

### Fixed

- Bug where aborting generation would not work

Thanks to @johnnyasantoss for making these changes.


## [1.1.0] - 2023-12-16

### Added

- Options for changing the model, temperature, and top_p parameters (see the example below). Thanks to @Entaigner for adding this.
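
The sketch below only illustrates how such settings typically look in settings.json; the keys and values are assumed, not taken from the extension's documentation.

```jsonc
// settings.json — hypothetical keys for the generation parameters added in 1.1.0
{
  // Any model pulled into the local Ollama install could be named here
  "localAiCodeCompletion.model": "codellama:7b-code-q4_K_S",

  // Sampling parameters passed through to Ollama
  "localAiCodeCompletion.temperature": 0.2,
  "localAiCodeCompletion.topP": 0.9
}
```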

## [1.0.2] - 2023-10-25

### Changed

- Switched the model from codellama:7b-code to codellama:7b-code-q4_K_S, which noticeably increases generation speed.

### Fixed

- Ollama server seemingly not starting when generation was triggered for the first time.

## [1.0.1] - 2023-10-23

### Added

- Additional usage instructions in README.

### Fixed

- Escape key being locked to aborting generation, which prevented other Escape key functions, such as closing IntelliSense, from working.
- Cancel button in the progress notification not working.

## [1.0.0] - 2023-10-23

- Initial release