Custom LLM built with Hugging Face (transformers). To build and run it locally, follow these steps:
docker build -t rag-chat-test .
docker run rag-chat-test
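A minimal sketch of what the custom LLM wrapper can look like, assuming a LangChain `LLM` subclass around a `transformers` text-generation pipeline; the class name, model name, and generation settings below are placeholders, not necessarily what this repo uses.

```python
# Sketch: custom LangChain LLM backed by a Hugging Face transformers pipeline.
# Model name and settings are placeholder assumptions.
from typing import Any, List, Optional

from langchain_core.language_models.llms import LLM
from transformers import pipeline


class HFCustomLLM(LLM):
    """Wraps a transformers text2text-generation pipeline as a LangChain LLM."""

    hf_model: str = "google/flan-t5-base"  # placeholder model

    @property
    def _llm_type(self) -> str:
        return "hf-custom-llm"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[Any] = None,
        **kwargs: Any,
    ) -> str:
        # Pipeline is rebuilt per call for brevity; cache it in practice.
        generator = pipeline("text2text-generation", model=self.hf_model)
        output = generator(prompt, max_new_tokens=256)
        return output[0]["generated_text"]


if __name__ == "__main__":
    llm = HFCustomLLM()
    print(llm.invoke("What is retrieval-augmented generation?"))
```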
Custom wrapper to extract important content from PDFs (a sketch follows this list):
- extract data into text files
- extract based on topics in the index
- extract based on page numbers
- create a separate folder for each book
- create separate text files for each topic
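A minimal sketch of that extraction flow, assuming `pypdf` for reading pages; the function name, topic-to-page mapping, and folder layout are illustrative assumptions, not the repo's actual implementation.

```python
# Sketch: extract a topic's page range into <out_dir>/<book_name>/<topic>.txt
from pathlib import Path

from pypdf import PdfReader


def extract_topic(pdf_path: str, out_dir: str, topic: str, pages: range) -> Path:
    """Write the given page range of a PDF to a per-book, per-topic text file."""
    reader = PdfReader(pdf_path)
    book_dir = Path(out_dir) / Path(pdf_path).stem   # one folder per book
    book_dir.mkdir(parents=True, exist_ok=True)

    text = "\n".join(reader.pages[p].extract_text() or "" for p in pages)
    out_file = book_dir / f"{topic}.txt"             # one text file per topic
    out_file.write_text(text, encoding="utf-8")
    return out_file


# Example usage with placeholder page numbers taken from a book's index:
# extract_topic("books/algorithms.pdf", "extracted", "sorting", range(10, 25))
```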
Embeds the extracted text into numerical vectors the model can work with. Uses LangChain
to create the custom embedder, together with an In-Memory Vector Store.
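A minimal sketch of the custom embedder and vector store, assuming LangChain's `Embeddings` interface and `InMemoryVectorStore` plus a sentence-transformers model; the model choice is an assumption, not necessarily the one used here.

```python
# Sketch: custom LangChain embedder + in-memory vector store.
from typing import List

from langchain_core.embeddings import Embeddings
from langchain_core.vectorstores import InMemoryVectorStore
from sentence_transformers import SentenceTransformer


class CustomEmbedder(Embeddings):
    """Converts text into numerical vectors via a sentence-transformers model."""

    def __init__(self, model_name: str = "all-MiniLM-L6-v2"):  # placeholder model
        self.model = SentenceTransformer(model_name)

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        return self.model.encode(texts).tolist()

    def embed_query(self, text: str) -> List[float]:
        return self.model.encode([text])[0].tolist()


# Index some extracted text and run a similarity search.
store = InMemoryVectorStore(embedding=CustomEmbedder())
store.add_texts(["Sorting algorithms order the elements of a list."])
print(store.similarity_search("how does sorting work?", k=1))
```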
RAG chat app deployment to AWS (via ECR):
aws ecr get-login-password --region {region} | docker login --username AWS --password-stdin {aws_account_id}.dkr.ecr.{region}.amazonaws.com
aws ecr create-repository --repository-name rag-chat-cohere --region {region}
docker tag rag-chat-test:latest {aws_account_id}.dkr.ecr.{region}.amazonaws.com/rag-chat-cohere
docker push {aws_account_id}.dkr.ecr.{region}.amazonaws.com/rag-chat-cohere