Releases · aws-solutions/generative-ai-application-builder-on-aws
v1.3.1
v1.3.0
Added
- Support for SageMaker as an LLM provider through SageMaker inference endpoints (see the invocation sketch after this list).
- Ability to deploy both the deployment dashboard and use cases within a VPC, with the option to bring an existing VPC or let the solution create one.
- Option to return and display the source documents that were referenced when generating a response in RAG use cases (see the sketch after this list).
- New model-info API in the deployment dashboard stack that retrieves the available providers, models, and model details. Default parameters are now stored for each model and provider combination and are used to pre-populate values in the wizard.
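
For context, a SageMaker endpoint is called through the `sagemaker-runtime` API. Below is a minimal boto3 sketch; the endpoint name and payload schema are hypothetical, since the expected request shape depends entirely on the model hosted behind the endpoint.

```python
import json
import boto3

# Hypothetical endpoint name; the real name comes from your SageMaker deployment.
ENDPOINT_NAME = "my-llm-endpoint"

runtime = boto3.client("sagemaker-runtime")

# The payload schema depends on the hosted model; this inputs/parameters
# shape is only illustrative.
payload = {
    "inputs": "Summarize the key points of the attached document.",
    "parameters": {"max_new_tokens": 256, "temperature": 0.5},
}

response = runtime.invoke_endpoint(
    EndpointName=ENDPOINT_NAME,
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(response["Body"].read().decode("utf-8"))
```

Source-document passthrough in RAG chains is commonly toggled with LangChain's `return_source_documents` flag. The sketch below shows that generic pattern, not the solution's exact wiring; the Kendra index ID and Bedrock model ID are placeholders.

```python
from langchain.chains import RetrievalQA
from langchain_community.chat_models import BedrockChat
from langchain_community.retrievers import AmazonKendraRetriever

# Placeholder IDs; substitute your own Kendra index and Bedrock model.
retriever = AmazonKendraRetriever(index_id="11111111-2222-3333-4444-555555555555")
llm = BedrockChat(model_id="anthropic.claude-v2:1")

qa = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=retriever,
    return_source_documents=True,  # include the retrieved docs in the output
)

result = qa.invoke({"query": "What does the solution deploy?"})
print(result["result"])
for doc in result["source_documents"]:
    print(doc.metadata.get("source"), "-", doc.page_content[:80])
```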
Changed
- Refactoring of UI components in the deployment dashboard.
- Switch to poetry for Python package management, replacing requirements.txt files (a minimal pyproject.toml sketch follows this list).
- Updates to Node and Python package versions.
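
For reference, poetry replaces `requirements.txt` with a `pyproject.toml` plus a lock file. A minimal, hypothetical sketch (the package name and versions here are illustrative only):

```toml
[tool.poetry]
name = "my-lambda-package"   # illustrative name
version = "0.1.0"
description = "Example of a poetry-managed Python package"

[tool.poetry.dependencies]
python = "^3.11"
boto3 = "^1.28"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```

Running `poetry install` then resolves and locks the full dependency tree, replacing `pip install -r requirements.txt`.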
v1.2.3
v1.2.2
Fixed
- Pinned `langchain-core` and `langchain-community` versions, fixing a test failure caused by unpinned versions in the `langchain` package's dependencies (a version-check sketch follows this list)
- Removed a race condition causing intermittent failures to deploy the UI infrastructure
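
A quick way to confirm which versions actually resolved in an environment, using only the Python standard library (package names as mentioned above):

```python
from importlib.metadata import version

# Print the resolved versions of the pinned packages; mismatches here are
# the kind of drift that unpinned transitive dependencies can cause.
for pkg in ("langchain", "langchain-core", "langchain-community"):
    print(pkg, version(pkg))
```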
Security
- Updated Node package versions to resolve security vulnerabilities
v1.2.1
v1.2.0
Added
- Support for Amazon Titan Text Lite, Anthropic Claude v2.1, Cohere Command models, and Meta Llama 2 Chat models (see the Bedrock invocation sketch below)
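
These are Amazon Bedrock-hosted models. Below is a minimal boto3 sketch invoking one of them (Claude v2.1); the prompt text is illustrative, while the model ID and Human/Assistant prompt convention are Bedrock's documented ones for Claude.

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

# Claude models on Bedrock expect the Human/Assistant prompt convention
# and a max_tokens_to_sample setting.
body = {
    "prompt": "\n\nHuman: Name three AWS regions.\n\nAssistant:",
    "max_tokens_to_sample": 200,
    "temperature": 0.5,
}

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2:1",
    contentType="application/json",
    accept="application/json",
    body=json.dumps(body),
)
print(json.loads(response["body"].read())["completion"])
```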
Changed
- Increased the cap on the maximum number of documents retrieved by the Amazon Kendra retriever (for RAG use cases) from 5 to 100, to match the API limit (see the retriever sketch below)
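
LangChain's `AmazonKendraRetriever` exposes this limit as `top_k`; the sketch below is a generic illustration (not necessarily the solution's exact retriever wiring), with a placeholder index ID.

```python
from langchain_community.retrievers import AmazonKendraRetriever

# top_k can now go up to 100 to match the Kendra API limit;
# the index ID below is a placeholder.
retriever = AmazonKendraRetriever(
    index_id="11111111-2222-3333-4444-555555555555",
    top_k=100,
)

docs = retriever.invoke("How do I configure a use case?")
print(len(docs))
```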
Fixed
- Fixed a typo in the UI deployment instructions (#26)
- Fixed a bug causing failures with dictionary type advanced model parameters
- Fixed a bug causing erroneous error messages to appear to the user in long-running conversations
Security
- Updated Python and Node package versions to resolve security vulnerabilities