
v0.2.1: Summarization with Local LLM

@Endle Endle released this 21 Oct 01:02
· 3 commits to master since this release
9967d1b

0.2.1

New feature: Note Summarization with Local LLM.

What happens locally, stays locally. Your notes are sent only to a local language model (Mistral-7B). The model summarizes the search hits and inserts the summary into the search engine results page.

Run server with local LLM

fireSeqSearch uses llamafile by Mozilla.

mkdir -pv ~/.llamafile && cd ~/.llamafile
# -O keeps the ?download=true query string out of the saved filename
wget -O mistral-7b-instruct-v0.2.Q4_0.llamafile 'https://huggingface.co/Mozilla/Mistral-7B-Instruct-v0.2-llamafile/resolve/main/mistral-7b-instruct-v0.2.Q4_0.llamafile?download=true'
chmod +x mistral-7b-instruct-v0.2.Q4_0.llamafile
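Once downloaded, the llamafile can be launched directly; running it starts llamafile's built-in llama.cpp web server (by default on 127.0.0.1:8080, which is llamafile's default rather than anything stated in these notes). How fireSeqSearch locates the model or server is not specified here, so this is only a sketch:

```shell
cd ~/.llamafile
# A llamafile is a self-contained executable; launching it starts a
# local llama.cpp server (127.0.0.1:8080 by default).
# --nobrowser suppresses opening the web UI in a browser.
./mistral-7b-instruct-v0.2.Q4_0.llamafile --nobrowser
```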

After that, compile and run fireSeqSearch with the LLM feature enabled:

cargo build --features llm
target/debug/fire_seq_search_server --notebook_path ~/logseq
# Obsidian users
target/debug/fire_seq_search_server --notebook_path ~/obsidian --obsidian-md
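Once the server is running, you can check that it responds before touching the browser addon. The port and endpoint below are assumptions for illustration (they are not given in these release notes); adjust them to match your build:

```shell
# Hypothetical health check: the port (3030) and the /query/<term>
# endpoint are assumptions, not taken from these release notes.
curl 'http://127.0.0.1:3030/query/logseq'
```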

Finally, update the Firefox Addon.

Demo Video

2024-09-21_15-04-38.mp4

This demo used AstroWiki, which is licensed under the MIT License.