Releases: steamship-core/steamship-langchain
v0.0.14
⚠️ Breaking Changes ⚠️
- Removed `memory.ConversationBufferWindowMemory` and `memory.ConversationBufferMemory`. They have been replaced with `memory.ChatMessageHistory`.
To keep up with changes in upstream LangChain, we converted our Memory classes into a single `ChatMessageHistory` that can be used with the upstream conversational memory classes. This is a bit unfortunate, but it should allow us to stay in step with the latest LangChain developments.
We apologize for the impact.
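The migration pattern looks roughly like the following sketch. The classes below are minimal, self-contained stand-ins (so the example runs on its own) for `steamship_langchain`'s `ChatMessageHistory` and upstream LangChain's `ConversationBufferMemory`, which accepts a `chat_memory` backend; the real classes have more parameters and behavior than shown here.

```python
class ChatMessageHistory:
    """Stand-in for steamship_langchain's ChatMessageHistory:
    a simple append-only store of chat messages."""

    def __init__(self):
        self.messages = []

    def add_user_message(self, text):
        self.messages.append(("human", text))

    def add_ai_message(self, text):
        self.messages.append(("ai", text))


class ConversationBufferMemory:
    """Stand-in for upstream LangChain's ConversationBufferMemory,
    which wraps a chat_memory backend and renders it as a buffer."""

    def __init__(self, chat_memory):
        self.chat_memory = chat_memory

    def buffer(self):
        return "\n".join(f"{role}: {text}" for role, text in self.chat_memory.messages)


# Instead of using a Steamship-specific memory class directly,
# pass the history into the upstream memory wrapper.
history = ChatMessageHistory()
history.add_user_message("Hi there")
history.add_ai_message("Hello! How can I help?")
memory = ConversationBufferMemory(chat_memory=history)
print(memory.buffer())
```

The key point of the change is the composition: Steamship supplies only the message store, and the upstream LangChain memory classes supply the windowing/buffering logic on top of it.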
What's Changed
- loaders: support allowed failures in bulk operations by @douglas-reid in #23
- feat: add support for direct File import to VectorStore by @douglas-reid in #25
- feat: add file loader for sphinx-based sites by @douglas-reid in #26
- deps: update to match latest LC memory refactor by @douglas-reid in #28
- feat: allow sphinx loader to sanitize and ignore by @douglas-reid in #27
Full Changelog: 0.0.13...0.0.14
v0.0.13 - Hotfix for VectorStore deployments
What's Changed
- Hotfix steamship vector store by @EniasCailliau in #22
Full Changelog: 0.0.12...0.0.13
v0.0.12
What's Changed
- feat: add logging callback by @douglas-reid in #18
- docs: add information on logging callback by @steamship-developers in #20
Full Changelog: 0.0.11...0.0.12
Add logging callback with doc fixes
- Merge pull request #16 from steamship-core/doc-fix: docs: fix document_loaders index
v0.0.11 - VectorStore and Loaders
What's Changed
- Update embedding model to text-embedding-ada-002 by @EniasCailliau in #8
- Enable in-place replacement of `langchain.OpenAI` with `steamship_langchain.OpenAI` by @EniasCailliau in #4
- First version of a SteamshipVectorStore by @EniasCailliau in #5
- feat: add initial set of file_loaders by @douglas-reid in #10
- feat: readthedocs support by @douglas-reid in #14
New Contributors
- @EniasCailliau made their first contribution in #8
Full Changelog: 0.0.10...0.0.11
Full OpenAI support
This release updates the LLM support to provide a full drop-in replacement for LangChain's OpenAI LLM, letting users route LLM calls through the Steamship backend with only a package-name change. Thanks to @EniasCailliau for the contribution.
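As a rough illustration of the "package name change only" claim above (shown as comments since it requires the installed packages and Steamship credentials to actually run):

```python
# Before: LLM calls go directly to OpenAI via LangChain.
# from langchain import OpenAI

# After: same class name and interface, backed by Steamship.
# Per these release notes, only the package name changes.
# from steamship_langchain import OpenAI

# Construction and usage stay the same, e.g.:
# llm = OpenAI(temperature=0.8)
```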
Better support for plugin retry logic in client
Includes a more forgiving `wait()` timeout in Task handling around generation. This works better with the updated retry/backoff behavior in the LLM plugin.