Scalable RAG Solutions / Agentic Workflows with Amazon Bedrock and Amazon OpenSearch Serverless
Widespread AI adoption is being driven by generative AI models that can produce human-like content. However, these foundation models are trained on general data, which makes them less effective for domain-specific tasks. This is where Retrieval Augmented Generation (RAG) comes in: RAG augments prompts with relevant external data to produce better domain-specific outputs. With RAG, documents and queries are converted to embeddings, the embeddings are compared to find the most relevant context, and that context is appended to the original prompt before it is passed to the LLM. Knowledge libraries can be updated asynchronously so the most relevant external data is always available for augmenting prompts.
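The retrieve-then-augment flow described above can be sketched in a few lines of Python. This is a toy illustration, not this repository's implementation: the bag-of-words `embed` function stands in for a real embedding model (such as one invoked through Amazon Bedrock), and the documents and vocabulary are made up.

```python
import math
import re

def embed(text):
    # Toy embedding: word counts over a tiny fixed vocabulary.
    # A real deployment would call an embedding model instead.
    vocab = ["refund", "policy", "shipping", "days", "password", "reset"]
    words = re.findall(r"[a-z]+", text.lower())
    return [float(words.count(w)) for w in vocab]

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

documents = [
    "Refund policy: refunds are issued within 14 days.",
    "Shipping takes 5 business days.",
    "Reset your password from the login page.",
]
doc_vectors = [embed(d) for d in documents]

def build_augmented_prompt(query):
    # Retrieve the most similar document and prepend it as context.
    qv = embed(query)
    best = max(range(len(documents)), key=lambda i: cosine(qv, doc_vectors[i]))
    return f"Context: {documents[best]}\n\nQuestion: {query}"

prompt = build_augmented_prompt("What is the refund policy?")
```

The augmented prompt now carries the retrieved refund-policy document as context, which is exactly what gets handed to the LLM in the RAG flow.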
Amazon OpenSearch Serverless (AOSS) offers a vector engine to store embeddings for fast similarity searches. The vector engine provides simple, scalable, high-performance similarity search in Amazon OpenSearch Serverless, making it easy to build generative AI applications without having to manage the underlying vector database infrastructure.
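As a concrete sketch of what the vector engine stores and answers, the request bodies below show a k-NN index mapping and query in OpenSearch's JSON DSL, expressed as Python dicts. The field names (`embedding`, `text`) and the 1536-dimension size are illustrative assumptions, not values used by this repository.

```python
# Illustrative k-NN index mapping for an OpenSearch vector index.
# Field names and the embedding dimension are assumptions for this sketch.
index_body = {
    "settings": {"index": {"knn": True}},
    "mappings": {
        "properties": {
            "embedding": {"type": "knn_vector", "dimension": 1536},
            "text": {"type": "text"},
        }
    },
}

def knn_query(query_vector, k=3):
    # Builds a top-k nearest-neighbour search request over the
    # "embedding" field; the hits returned are the retrieval context.
    return {
        "size": k,
        "query": {"knn": {"embedding": {"vector": query_vector, "k": k}}},
    }
```

With the opensearch-py client, these bodies would be passed to `client.indices.create(...)` and `client.search(...)` against the collection endpoint.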
[!NOTE] This repository offers a production-ready, easily deployable Generative AI solution with the following features:
- Document chat
- Multi-Agent collaboration
- Sentiment Analysis
- PII Redaction
- OCR
[!IMPORTANT] The older UI is maintained in the v0.0.1 (Old-UI) branch.
Latest project updates
- 08-Nov-2024 Supports Claude-3.5 Haiku for RAG/OCR/PII Identification/Sentiment Analysis
- 29-Oct-2024 Supports Claude-3.5 Sonnet V2/Opus for RAG/OCR/PII Identification/Sentiment Analysis
- 1-Sept-2024 Document-aware chunking strategy, to answer questions comparing several documents. For example: What did I say in Doc 1 that I contradict in Doc 7?
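A minimal sketch of what "document-aware" chunking means: each chunk carries the identity of its source document, so retrieved context can be attributed and compared across documents. The function below illustrates the idea only; it is not the repository's actual chunker.

```python
def chunk_documents(docs, chunk_size=200):
    # docs: {document name: full text}. Each chunk records its source
    # document so cross-document questions ("compare Doc 1 and Doc 7")
    # can be answered from the retrieved chunks' doc_id fields.
    chunks = []
    for doc_id, text in docs.items():
        for start in range(0, len(text), chunk_size):
            chunks.append({"doc_id": doc_id, "text": text[start:start + chunk_size]})
    return chunks
```

Because every chunk is tagged with `doc_id`, the retrieval layer can group context per document before asking the LLM to compare them.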
Prerequisites
Section 1 - Create an IAM user with Administrator permissions to deploy this stack (OPTIONAL: if you already have an Admin role, you may skip this step)
- Search for the IAM service on the AWS Console, go to the IAM Dashboard, click the "Roles" tab under "Access Management", and click "Create Role".
- Select "AWS Account" and click "Next".
- Under permissions, select "AdministratorAccess".
- You can now assume this role and proceed to deploy the stack: click "Switch Role" in the console.
- Proceed to Section 2.
Section 2 - Deploy this RAG-based Solution (total deployment time ~40 minutes; execute the commands below in the region of deployment)
- Switch to the Admin role. Search for the CloudShell service on the AWS Console and follow the steps below.
- Clone the serverless-rag-demo repository from aws-samples:
  git clone https://github.com/aws-samples/serverless-rag-demo.git
- Go to the directory with the downloaded files:
  cd serverless-rag-demo
- Run the bash script that creates the RAG-based solution, passing the environment and region for deployment. The environment can be dev, qa, or sandbox. See Prerequisites to deploy to the correct region:
  sh creator.sh
- Press Enter to proceed with the deployment of the stack, or Ctrl+C to exit.
- The UI is hosted on AWS App Runner. The App Runner link appears in CloudShell once the script execution completes, or you can go to the App Runner service on the AWS Console and obtain the HTTPS URL. The UI is authenticated through Amazon Cognito, so the very first time you must sign up and then sign in to access the application.
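You can also fetch the App Runner URL programmatically. Below is a hedged sketch using boto3's App Runner `ListServices` API; the client is injectable so the sketch can be exercised without AWS credentials, and it reads only the first page of results.

```python
def get_apprunner_urls(client=None):
    # Returns {service name: https URL} for App Runner services in the
    # current region (first page only; paginate with NextToken if needed).
    if client is None:
        import boto3  # imported lazily; the live call needs AWS credentials
        client = boto3.client("apprunner")
    summaries = client.list_services()["ServiceSummaryList"]
    return {s["ServiceName"]: "https://" + s["ServiceUrl"] for s in summaries}
```

Calling `get_apprunner_urls()` from CloudShell (with the Admin role active) prints the same HTTPS URL the deployment script reports.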
(ADVANCED) Using an existing Bedrock Knowledge base
[!IMPORTANT] You can query an existing Knowledge Base created on Amazon Bedrock, provided it uses Amazon OpenSearch Serverless as its vector store.
- Get the Collection ARN and the embedding model used by your Knowledge Base on Bedrock.
- Head to Amazon OpenSearch Serverless and search by ARN to fetch the OpenSearch endpoint.
- Modify the configuration of your bedrock_rag_query_* Lambda function. Set the variables below:
  a. IS_BEDROCK_KB = yes
  b. OPENSEARCH_VECTOR_ENDPOINT = <your OpenSearch Serverless endpoint>
  c. EMBED_MODEL_ID = <embedding model ID used by your Bedrock KB>. Find the base model ID here: https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html
  d. VECTOR_INDEX_NAME = <VECTOR_INDEX used by your Bedrock KB>
  e. BEDROCK_KB_EMBEDDING_KEY = <embedding field key used by your Bedrock KB>
- Head to Amazon OpenSearch on the AWS Console and click "Data Access Policies". Search for the data access policy attached to your Bedrock KB and click the "Edit" button.
- In the principal section, add the ARN of your Lambda role and hit "Save".
- Now try Document Chat on the UI; it should query your Amazon Bedrock Knowledge Base.
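The Lambda configuration step above can also be scripted. Below is a sketch using boto3's `update_function_configuration` API; the function name suffix and every `<...>` placeholder are assumptions you must replace with your Knowledge Base details.

```python
# Placeholder values -- replace each <...> with your Bedrock KB details.
KB_ENV = {
    "IS_BEDROCK_KB": "yes",
    "OPENSEARCH_VECTOR_ENDPOINT": "<your-aoss-endpoint>",
    "EMBED_MODEL_ID": "<embedding-model-id>",
    "VECTOR_INDEX_NAME": "<vector-index-name>",
    "BEDROCK_KB_EMBEDDING_KEY": "<embedding-field-key>",
}

def configure_kb_lambda(function_name, env=KB_ENV, client=None):
    # Applies the environment variables to the bedrock_rag_query_* Lambda.
    if client is None:
        import boto3  # imported lazily; the live call needs AWS credentials
        client = boto3.client("lambda")
    return client.update_function_configuration(
        FunctionName=function_name,
        Environment={"Variables": env},
    )
```

For example, `configure_kb_lambda("bedrock_rag_query_dev")` (a hypothetical function name for a dev deployment) would apply the variables in one call instead of editing them in the console.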
[!IMPORTANT] We do not support indexing to an existing Knowledge base. That can be done through the Amazon Bedrock Console.
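For reference, the data access policy edit described above might leave the policy looking like the structure below (expressed as a Python object). The account ID, collection name, role names, and permission list are all illustrative placeholders, not values from this repository.

```python
lambda_role_arn = "arn:aws:iam::123456789012:role/bedrock-rag-query-role"  # placeholder

# Illustrative AOSS data access policy: the Lambda role ARN has been
# appended to the Principal list so the function can read the KB's index.
data_access_policy = [
    {
        "Rules": [
            {
                "ResourceType": "index",
                "Resource": ["index/my-kb-collection/*"],
                "Permission": ["aoss:ReadDocument", "aoss:DescribeIndex"],
            }
        ],
        "Principal": [
            "arn:aws:iam::123456789012:role/bedrock-kb-service-role",
            lambda_role_arn,
        ],
    }
]
```

The key point is simply that the Lambda role joins the existing Bedrock KB service role in the `Principal` list; the rules themselves stay unchanged.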