From 5d7cb0064090b999b49d37bdf63f0b1ca1ce3830 Mon Sep 17 00:00:00 2001
From: silvia
Date: Tue, 11 Jun 2024 21:52:08 +0800
Subject: [PATCH 1/4] Update advanced RAG in README.md

---
 README.md | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 101047c..483b2ef 100644
--- a/README.md
+++ b/README.md
@@ -352,10 +352,11 @@ With RAG, LLMs retrieves contextual documents from a database to improve the acc
 
 Real-life applications can require complex pipelines, including SQL or graph databases, as well as automatically selecting relevant tools and APIs. These advanced techniques can improve a baseline solution and provide additional features.
 
-* **Query construction**: Structured data stored in traditional databases requires a specific query language like SQL, Cypher, metadata, etc. We can directly translate the user instruction into a query to access the data with query construction. 
+* **Query construction**: Structured data stored in traditional databases requires a specific query language like SQL, Cypher, metadata, etc. We can directly translate the user instruction into a query to access the data with query construction.
 * **Agents and tools**: Agents augment LLMs by automatically selecting the most relevant tools to provide an answer. These tools can be as simple as using Google or Wikipedia, or more complex like a Python interpreter or Jira.
-* **Post-processing**: Final step that processes the inputs that are fed to the LLM. It enhances the relevance and diversity of documents retrieved with re-ranking, [RAG-fusion](https://github.com/Raudaschl/rag-fusion), and classification.
+* **Post-processing**: Final step that processes the inputs that are fed to the LLM. It enhances the relevance and diversity of documents retrieved with re-ranking, [RAG-fusion](https://github.com/Raudaschl/rag-fusion), and classification. For more query structuring techniques, check out the Langchain course RAG From Scratch here (https://www.youtube.com/playlist?list=PLfaIDFEXuae2LXbO1_PKyVJiQ23ZztA0x).
 * **Program LLMs**: Frameworks like [DSPy](https://github.com/stanfordnlp/dspy) allow you to optimize prompts and weights based on automated evaluations in a programmatic way.
+* **LLM routing**: Construct adaptive RAG structure with self-check for relevance, correctness, and hallucination. Powered with LangGraph for control and routing. (https://github.com/langchain-ai/langgraph/blob/main/examples/rag/langgraph_adaptive_rag.ipynb)
 
 📚 **References**:
 * [LangChain - Query Construction](https://blog.langchain.dev/query-construction/): Blog post about different types of query construction.
@@ -364,6 +365,7 @@ Real-life applications can require complex pipelines, including SQL or graph dat
 * [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) by Lilian Weng: More theoretical article about LLM agents.
 * [LangChain - OpenAI's RAG](https://blog.langchain.dev/applying-openai-rag/): Overview of the RAG strategies employed by OpenAI, including post-processing.
 * [DSPy in 8 Steps](https://dspy-docs.vercel.app/docs/building-blocks/solving_your_task): General-purpose guide to DSPy introducing modules, signatures, and optimizers.
+* [Langchain Adaptive RAG](https://langchain-ai.github.io/langgraph/tutorials/rag/langgraph_adaptive_rag/):a strategy for RAG that unites (1) query analysis with (2) active / self-corrective RAG.
 
 ---
 ### 5. Inference optimization
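The LLM-routing bullet added in the patch above compresses a fairly involved control flow. The sketch below is a minimal, dependency-free Python illustration of that flow, not the LangGraph notebook itself: every function in it (`route_query`, `grade_relevance`, `is_grounded`, and so on) is a stub standing in for an LLM or retriever call.

```python
from dataclasses import dataclass, field

@dataclass
class RAGState:
    """State threaded through the pipeline, similar in spirit to a LangGraph state dict."""
    question: str
    documents: list = field(default_factory=list)
    answer: str = ""

# --- Stubbed components (placeholders for real LLM and retriever calls) ---

def route_query(question: str) -> str:
    """Route domain questions to the vector store, everything else to web search."""
    return "vectorstore" if "agent" in question.lower() else "web_search"

def retrieve_from_vectorstore(question: str) -> list:
    return ["RAG combines retrieval with generation.", "Agents select tools automatically."]

def search_web(question: str) -> list:
    return ["Fresh web snippet related to the question."]

def grade_relevance(question: str, documents: list) -> list:
    """Keep only documents an LLM grader would mark relevant (here: naive keyword overlap)."""
    keywords = set(question.lower().split())
    return [d for d in documents if keywords & set(d.lower().split())] or documents

def generate(question: str, documents: list) -> str:
    return f"Answer to '{question}' grounded in {len(documents)} document(s)."

def is_grounded(answer: str, documents: list) -> bool:
    """Hallucination check: in adaptive RAG an LLM verifies the answer against the documents."""
    return bool(documents)

# --- Adaptive RAG control flow: route -> retrieve -> grade -> generate -> self-check ---

def adaptive_rag(question: str, max_retries: int = 2) -> RAGState:
    state = RAGState(question=question)
    for _ in range(max_retries):
        source = route_query(state.question)
        docs = retrieve_from_vectorstore(state.question) if source == "vectorstore" else search_web(state.question)
        state.documents = grade_relevance(state.question, docs)
        state.answer = generate(state.question, state.documents)
        if is_grounded(state.answer, state.documents):
            return state  # answer passed the self-check
    state.answer = "I could not produce a grounded answer."
    return state

if __name__ == "__main__":
    print(adaptive_rag("How do agents select tools?").answer)
```

In the linked LangGraph tutorial, each of these steps becomes a graph node and the self-checks become conditional edges that decide whether to re-retrieve, re-generate, or return the answer.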
From 5fda9ff0af2516ec2457ca22727238ce0d28425f Mon Sep 17 00:00:00 2001
From: silvia
Date: Tue, 11 Jun 2024 21:54:57 +0800
Subject: [PATCH 2/4] Update advanced RAG in README.md

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 483b2ef..2bcfef0 100644
--- a/README.md
+++ b/README.md
@@ -354,9 +354,9 @@ Real-life applications can require complex pipelines, including SQL or graph dat
 
 * **Query construction**: Structured data stored in traditional databases requires a specific query language like SQL, Cypher, metadata, etc. We can directly translate the user instruction into a query to access the data with query construction.
 * **Agents and tools**: Agents augment LLMs by automatically selecting the most relevant tools to provide an answer. These tools can be as simple as using Google or Wikipedia, or more complex like a Python interpreter or Jira.
-* **Post-processing**: Final step that processes the inputs that are fed to the LLM. It enhances the relevance and diversity of documents retrieved with re-ranking, [RAG-fusion](https://github.com/Raudaschl/rag-fusion), and classification. For more query structuring techniques, check out the Langchain course RAG From Scratch here (https://www.youtube.com/playlist?list=PLfaIDFEXuae2LXbO1_PKyVJiQ23ZztA0x).
+* **Post-processing**: Final step that processes the inputs that are fed to the LLM. It enhances the relevance and diversity of documents retrieved with re-ranking, [RAG-fusion](https://github.com/Raudaschl/rag-fusion), and classification. For more query structuring techniques, check out the Langchain course [RAG From Scratch] (https://www.youtube.com/playlist?list=PLfaIDFEXuae2LXbO1_PKyVJiQ23ZztA0x).
 * **Program LLMs**: Frameworks like [DSPy](https://github.com/stanfordnlp/dspy) allow you to optimize prompts and weights based on automated evaluations in a programmatic way.
-* **LLM routing**: Construct adaptive RAG structure with self-check for relevance, correctness, and hallucination. Powered with LangGraph for control and routing. (https://github.com/langchain-ai/langgraph/blob/main/examples/rag/langgraph_adaptive_rag.ipynb)
+* **LLM routing**: Construct [adaptive RAG structure](https://github.com/langchain-ai/langgraph/blob/main/examples/rag/langgraph_adaptive_rag.ipynb) with self-check for relevance, correctness, and hallucination. Powered with LangGraph for control and routing.
 
 📚 **References**:
 * [LangChain - Query Construction](https://blog.langchain.dev/query-construction/): Blog post about different types of query construction.
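The post-processing bullet touched by the patch above points to RAG-fusion, whose core step is reciprocal rank fusion (RRF) over the rankings retrieved for several generated variants of the user query. Below is a small, self-contained sketch of RRF; the document ids and rankings are made-up placeholders, and `k = 60` is the constant commonly used for RRF.

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists of document ids into a single ranking.

    Each document scores sum(1 / (k + rank)) over the lists it appears in,
    so documents ranked highly by several query variants float to the top.
    """
    scores: dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Rankings returned by the retriever for three generated variants of one user query
# (document ids are placeholders).
rankings = [
    ["doc_a", "doc_b", "doc_c"],  # variant 1
    ["doc_b", "doc_a", "doc_d"],  # variant 2
    ["doc_b", "doc_c", "doc_a"],  # variant 3
]
print(reciprocal_rank_fusion(rankings))  # ['doc_b', 'doc_a', 'doc_c', 'doc_d']
```

Documents that rank well across several query variants accumulate the largest scores, which is what improves both the relevance and the diversity of the final context.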
From 2bcd85bd64054c7b5de6dae8a9f35d684ed50179 Mon Sep 17 00:00:00 2001
From: silvia
Date: Tue, 11 Jun 2024 21:57:06 +0800
Subject: [PATCH 3/4] Update advanced RAG in README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 2bcfef0..7c94bc3 100644
--- a/README.md
+++ b/README.md
@@ -354,7 +354,7 @@ Real-life applications can require complex pipelines, including SQL or graph dat
 
 * **Query construction**: Structured data stored in traditional databases requires a specific query language like SQL, Cypher, metadata, etc. We can directly translate the user instruction into a query to access the data with query construction.
 * **Agents and tools**: Agents augment LLMs by automatically selecting the most relevant tools to provide an answer. These tools can be as simple as using Google or Wikipedia, or more complex like a Python interpreter or Jira.
-* **Post-processing**: Final step that processes the inputs that are fed to the LLM. It enhances the relevance and diversity of documents retrieved with re-ranking, [RAG-fusion](https://github.com/Raudaschl/rag-fusion), and classification. For more query structuring techniques, check out the Langchain course [RAG From Scratch] (https://www.youtube.com/playlist?list=PLfaIDFEXuae2LXbO1_PKyVJiQ23ZztA0x).
+* **Post-processing**: Final step that processes the inputs that are fed to the LLM. It enhances the relevance and diversity of documents retrieved with re-ranking, [RAG-fusion](https://github.com/Raudaschl/rag-fusion), and classification. For more query structuring techniques, check out the Langchain course [RAG From Scratch](https://www.youtube.com/playlist?list=PLfaIDFEXuae2LXbO1_PKyVJiQ23ZztA0x).
 * **Program LLMs**: Frameworks like [DSPy](https://github.com/stanfordnlp/dspy) allow you to optimize prompts and weights based on automated evaluations in a programmatic way.
 * **LLM routing**: Construct [adaptive RAG structure](https://github.com/langchain-ai/langgraph/blob/main/examples/rag/langgraph_adaptive_rag.ipynb) with self-check for relevance, correctness, and hallucination. Powered with LangGraph for control and routing.
 
From ebcc21d50635b9a625a7e0f8eeb170b6731ed51b Mon Sep 17 00:00:00 2001
From: silvia
Date: Tue, 11 Jun 2024 22:25:48 +0800
Subject: [PATCH 4/4] Update RAG README.md

---
 README.md | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 7c94bc3..8539045 100644
--- a/README.md
+++ b/README.md
@@ -356,7 +356,9 @@ Real-life applications can require complex pipelines, including SQL or graph dat
 * **Agents and tools**: Agents augment LLMs by automatically selecting the most relevant tools to provide an answer. These tools can be as simple as using Google or Wikipedia, or more complex like a Python interpreter or Jira.
 * **Post-processing**: Final step that processes the inputs that are fed to the LLM. It enhances the relevance and diversity of documents retrieved with re-ranking, [RAG-fusion](https://github.com/Raudaschl/rag-fusion), and classification. For more query structuring techniques, check out the Langchain course [RAG From Scratch](https://www.youtube.com/playlist?list=PLfaIDFEXuae2LXbO1_PKyVJiQ23ZztA0x).
 * **Program LLMs**: Frameworks like [DSPy](https://github.com/stanfordnlp/dspy) allow you to optimize prompts and weights based on automated evaluations in a programmatic way.
-* **LLM routing**: Construct [adaptive RAG structure](https://github.com/langchain-ai/langgraph/blob/main/examples/rag/langgraph_adaptive_rag.ipynb) with self-check for relevance, correctness, and hallucination. Powered with LangGraph for control and routing.
+* **LLM routing**: Construct an [adaptive RAG pipeline](https://github.com/langchain-ai/langgraph/blob/main/examples/rag/langgraph_adaptive_rag.ipynb) with self-checks for document relevance, answer correctness, and hallucination, powered by LangGraph for control flow and routing.
+* **Other retrieval sources**: RAG can also be powered by knowledge graphs using [LangChain](https://python.langchain.com/v0.1/docs/use_cases/graph/constructing/) or [LlamaIndex](https://docs.llamaindex.ai/en/stable/examples/query_engine/knowledge_graph_query_engine/).
+See this [LinkedIn research paper](https://arxiv.org/html/2404.17723v1) for an example that combines a knowledge graph with RAG in a customer support assistant. LangChain also supports [querying SQL databases](https://python.langchain.com/v0.2/docs/integrations/toolkits/sql_database/).
 
 📚 **References**:
 * [LangChain - Query Construction](https://blog.langchain.dev/query-construction/): Blog post about different types of query construction.
@@ -365,7 +367,8 @@ Real-life applications can require complex pipelines, including SQL or graph dat
 * [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) by Lilian Weng: More theoretical article about LLM agents.
 * [LangChain - OpenAI's RAG](https://blog.langchain.dev/applying-openai-rag/): Overview of the RAG strategies employed by OpenAI, including post-processing.
 * [DSPy in 8 Steps](https://dspy-docs.vercel.app/docs/building-blocks/solving_your_task): General-purpose guide to DSPy introducing modules, signatures, and optimizers.
-* [Langchain Adaptive RAG](https://langchain-ai.github.io/langgraph/tutorials/rag/langgraph_adaptive_rag/):a strategy for RAG that unites (1) query analysis with (2) active / self-corrective RAG.
+* [LangGraph - Adaptive RAG](https://langchain-ai.github.io/langgraph/tutorials/rag/langgraph_adaptive_rag/): A RAG strategy that unites (1) query analysis with (2) active / self-corrective RAG.
+* [LangChain - Knowledge Graphs](https://blog.langchain.dev/enhancing-rag-based-applications-accuracy-by-constructing-and-leveraging-knowledge-graphs/): Enhancing the accuracy of RAG-based applications by constructing and leveraging knowledge graphs.
 
 ---
 ### 5. Inference optimization
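To make the query-construction and SQL-retrieval ideas above concrete, here is a hedged sketch of text-to-SQL query construction with LangChain, in the spirit of the SQL toolkit page linked in the last patch. It assumes an OpenAI API key and a local `Chinook.db` sample database (both are assumptions, not part of the patches), and the exact imports and helper names can shift between LangChain versions.

```python
from langchain_community.utilities import SQLDatabase
from langchain_openai import ChatOpenAI
from langchain.chains import create_sql_query_chain

# Connect to a local SQLite sample database (any SQLAlchemy URI would work here).
db = SQLDatabase.from_uri("sqlite:///Chinook.db")
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# The chain prompts the LLM with the table schema and the user question,
# and returns the generated SQL as a string.
write_query = create_sql_query_chain(llm, db)
sql = write_query.invoke({"question": "How many customers are there?"})

print(sql)          # e.g. "SELECT COUNT(*) FROM Customer;"
print(db.run(sql))  # run the generated query against the database
```

The same pattern extends to the other structured sources mentioned above: swapping the SQL chain for a Cypher-generating chain over a graph database yields knowledge-graph retrieval instead.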