From c63281c9619f69315e434c56c7b17ee443283079 Mon Sep 17 00:00:00 2001
From: iusztinpaul
Date: Thu, 8 Feb 2024 14:42:01 +0200
Subject: [PATCH 1/3] docs: Add articles
---
README.md | 23 +++++++++++++++++++----
1 file changed, 19 insertions(+), 4 deletions(-)
diff --git a/README.md b/README.md
index aaa135f..becb361 100644
--- a/README.md
+++ b/README.md
@@ -19,8 +19,9 @@
 - [2.5. AWS](#25-aws)
 - [3. Install & Usage](#3-install--usage)
 - [4. Video lectures](#4-video-lectures)
-- [5. License](#5-license)
-- [6. Contributors & Teachers](#6-contributors--teachers)
+- [5. Articles](#5-articles)
+- [6. License](#6-license)
+- [7. Contributors & Teachers](#7-contributors--teachers)
 ------
@@ -223,11 +224,25 @@ Thus, check out the README for every module individually to see how to install &
-## 5. License
+## 5. Articles
+
+To understand the entire code step-by-step, check out our articles ↓
+
+- [Lesson 1 | System Design: The LLMs Kit: Build a Production-Ready Real-Time Financial Advisor System Using Streaming Pipelines, RAG, and LLMOps](https://medium.com/decodingml/the-llms-kit-build-a-production-ready-real-time-financial-advisor-system-using-streaming-ffdcb2b50714)
+- [Lesson 2 | Feature pipeline: Why you must choose streaming over batch pipelines when doing RAG in LLM applications](https://medium.com/decoding-ml/why-you-must-choose-streaming-over-batch-pipelines-when-doing-rag-in-llm-applications-3b6fd32a93ff)
+- [Lesson 3 | Feature pipeline: This is how you can build & deploy a streaming pipeline to populate a vector DB for real-time RAG](https://medium.com/decodingml/this-is-how-you-can-build-deploy-a-streaming-pipeline-to-populate-a-vector-db-for-real-time-rag-c92cfbbd4d62)
+- [Lesson 4 | Training pipeline: 5 concepts that must be in your LLM fine-tuning kit](https://medium.com/decodingml/5-concepts-that-must-be-in-your-llm-fine-tuning-kit-59183c7ce60e)
+- [Lesson 5 | Training pipeline: The secret of writing generic code to fine-tune any LLM using QLoRA](https://medium.com/decodingml/the-secret-of-writing-generic-code-to-fine-tune-any-llm-using-qlora-9b1822f3c6a4)
+- [Lesson 6 | Training pipeline: From LLM development to continuous training pipelines using LLMOps](https://medium.com/decodingml/from-llm-development-to-continuous-training-pipelines-using-llmops-a3792b05061c)
+- [Lesson 7 | Inference pipeline: Design a RAG LangChain application leveraging the 3-pipeline architecture](https://medium.com/decodingml/design-a-rag-langchain-application-leveraging-the-3-pipeline-architecture-46bcc3cb3500)
+- [Lesson 8 | Inference pipeline: Prepare your RAG LangChain application for production](https://medium.com/decodingml/prepare-your-rag-langchain-application-for-production-5f75021cd381)
+
+
+## 6. License
 This course is an open-source project released under the MIT license. Thus, as long as you distribute our LICENSE and acknowledge our work, you can safely clone or fork this project and use it as a source of inspiration for whatever you want (e.g., university projects, college degree projects).
-## 6. Contributors & Teachers
+## 7. Contributors & Teachers
From f088c0eeb0aea0cf64e839dd4567e4e9a5616d66 Mon Sep 17 00:00:00 2001
From: iusztinpaul
Date: Thu, 8 Feb 2024 14:44:18 +0200
Subject: [PATCH 2/3] docs: prettify articles
---
README.md | 23 +++++++++++++++--------
1 file changed, 15 insertions(+), 8 deletions(-)
diff --git a/README.md b/README.md
index becb361..bc8ea76 100644
--- a/README.md
+++ b/README.md
@@ -228,14 +228,21 @@ Thus, check out the README for every module individually to see how to install &
 To understand the entire code step-by-step, check out our articles ↓
-- [Lesson 1 | System Design: The LLMs Kit: Build a Production-Ready Real-Time Financial Advisor System Using Streaming Pipelines, RAG, and LLMOps](https://medium.com/decodingml/the-llms-kit-build-a-production-ready-real-time-financial-advisor-system-using-streaming-ffdcb2b50714)
-- [Lesson 2 | Feature pipeline: Why you must choose streaming over batch pipelines when doing RAG in LLM applications](https://medium.com/decoding-ml/why-you-must-choose-streaming-over-batch-pipelines-when-doing-rag-in-llm-applications-3b6fd32a93ff)
-- [Lesson 3 | Feature pipeline: This is how you can build & deploy a streaming pipeline to populate a vector DB for real-time RAG](https://medium.com/decodingml/this-is-how-you-can-build-deploy-a-streaming-pipeline-to-populate-a-vector-db-for-real-time-rag-c92cfbbd4d62)
-- [Lesson 4 | Training pipeline: 5 concepts that must be in your LLM fine-tuning kit](https://medium.com/decodingml/5-concepts-that-must-be-in-your-llm-fine-tuning-kit-59183c7ce60e)
-- [Lesson 5 | Training pipeline: The secret of writing generic code to fine-tune any LLM using QLoRA](https://medium.com/decodingml/the-secret-of-writing-generic-code-to-fine-tune-any-llm-using-qlora-9b1822f3c6a4)
-- [Lesson 6 | Training pipeline: From LLM development to continuous training pipelines using LLMOps](https://medium.com/decodingml/from-llm-development-to-continuous-training-pipelines-using-llmops-a3792b05061c)
-- [Lesson 7 | Inference pipeline: Design a RAG LangChain application leveraging the 3-pipeline architecture](https://medium.com/decodingml/design-a-rag-langchain-application-leveraging-the-3-pipeline-architecture-46bcc3cb3500)
-- [Lesson 8 | Inference pipeline: Prepare your RAG LangChain application for production](https://medium.com/decodingml/prepare-your-rag-langchain-application-for-production-5f75021cd381)
+### System design
+- [Lesson 1: The LLMs Kit: Build a Production-Ready Real-Time Financial Advisor System Using Streaming Pipelines, RAG, and LLMOps](https://medium.com/decodingml/the-llms-kit-build-a-production-ready-real-time-financial-advisor-system-using-streaming-ffdcb2b50714)
+
+### Feature pipeline
+- [Lesson 2: Why you must choose streaming over batch pipelines when doing RAG in LLM applications](https://medium.com/decoding-ml/why-you-must-choose-streaming-over-batch-pipelines-when-doing-rag-in-llm-applications-3b6fd32a93ff)
+- [Lesson 3: This is how you can build & deploy a streaming pipeline to populate a vector DB for real-time RAG](https://medium.com/decodingml/this-is-how-you-can-build-deploy-a-streaming-pipeline-to-populate-a-vector-db-for-real-time-rag-c92cfbbd4d62)
+
+### Training pipeline
+- [Lesson 4: 5 concepts that must be in your LLM fine-tuning kit](https://medium.com/decodingml/5-concepts-that-must-be-in-your-llm-fine-tuning-kit-59183c7ce60e)
+- [Lesson 5: The secret of writing generic code to fine-tune any LLM using QLoRA](https://medium.com/decodingml/the-secret-of-writing-generic-code-to-fine-tune-any-llm-using-qlora-9b1822f3c6a4)
+- [Lesson 6: From LLM development to continuous training pipelines using LLMOps](https://medium.com/decodingml/from-llm-development-to-continuous-training-pipelines-using-llmops-a3792b05061c)
+
+### Inference pipeline
+- [Lesson 7: Design a RAG LangChain application leveraging the 3-pipeline architecture](https://medium.com/decodingml/design-a-rag-langchain-application-leveraging-the-3-pipeline-architecture-46bcc3cb3500)
+- [Lesson 8: Prepare your RAG LangChain application for production](https://medium.com/decodingml/prepare-your-rag-langchain-application-for-production-5f75021cd381)
 ## 6. License
From 35f0d10e492513ea1f9f65c3d8fb4ea1bcf01ed0 Mon Sep 17 00:00:00 2001
From: iusztinpaul
Date: Thu, 8 Feb 2024 14:44:59 +0200
Subject: [PATCH 3/3] docs: prettify articles
---
README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README.md b/README.md
index bc8ea76..1b82342 100644
--- a/README.md
+++ b/README.md
@@ -226,7 +226,7 @@ Thus, check out the README for every module individually to see how to install &
 ## 5. Articles
-To understand the entire code step-by-step, check out our articles ↓
+`To understand the entire code step-by-step, check out our articles` ↓
 ### System design
 - [Lesson 1: The LLMs Kit: Build a Production-Ready Real-Time Financial Advisor System Using Streaming Pipelines, RAG, and LLMOps](https://medium.com/decodingml/the-llms-kit-build-a-production-ready-real-time-financial-advisor-system-using-streaming-ffdcb2b50714)