Releases · StanfordSpezi/SpeziLLM
0.8.3
0.8.2
What's Changed
- Update Llama.cpp & Llama 3 Support by @PSchmiedmayer & @vishnuravi in #55
Full Changelog: 0.8.1...0.8.2
0.8.1
What's Changed
- ⬆️ Bump express from 4.18.3 to 4.19.2 in /FogNode/auth by @dependabot in #53
New Contributors
- @dependabot made their first contribution in #53
Full Changelog: 0.8.0...0.8.1
0.8.0
What's Changed
- Add SpeziLLMFog that performs dynamic LLM job dispatch to fog nodes by @philippzagar in #52
Full Changelog: 0.7.2...0.8.0
0.7.2
What's Changed
- Fixes model selection response in LLMOpenAIModelOnboardingStep picker by @vishnuravi in #50
- Added exportFormat parameter to LLMChatView by @nriedman in #48
Full Changelog: 0.7.1...0.7.2
0.7.1
What's Changed
- macOS / visionOS support and ability to run Google Gemma LLMs by @philippzagar in #47
Full Changelog: 0.7.0...0.7.1
0.7.0
What's Changed
- Structural improvements to SpeziLLM by @philippzagar in #45
Full Changelog: 0.6.1...0.7.0
0.6.1
What's Changed
- Custom prompt formatting closures, documentation enhancements, lifting to llama.cpp upstream by @philippzagar in #43
- Fix for empty OpenAI functions array by @philippzagar in #44
Full Changelog: 0.6.0...0.6.1
0.6.0
0.5.0
What's Changed
- SpeziLLM Remote OpenAI integration by @philippzagar in #41
Full Changelog: 0.4.0...0.5.0