
Commit 2bca5a2

docs: fix typos in rag (#626)
## Overview

Fix typos in rag.mdx under OSS LangChain

## Type of change

**Type:** Fix typo

## Related issues/PRs

- GitHub issue: N/A
- Feature PR: N/A
- Linear issue: N/A
- Slack thread: N/A

## Checklist

- [x] I have read the [contributing guidelines](README.md)
- [x] I have tested my changes locally using `docs dev`
- [x] All code examples have been tested and work correctly
- [x] I have used **root relative** paths for internal links
- [x] I have updated navigation in `src/docs.json` if needed
- [ ] I have gotten approval from the relevant reviewers
- [ ] (Internal team members only / optional) I have created a preview deployment using the [Create Preview Branch workflow](https://github.com/langchain-ai/docs/actions/workflows/create-preview-branch.yml)

## Additional notes
1 parent 336e714 commit 2bca5a2

File tree

1 file changed: +8 −8 lines changed


src/oss/langchain/rag.mdx

Lines changed: 8 additions & 8 deletions
```diff
@@ -127,7 +127,7 @@ Select a vector store:
 
 ## Preview
 
-In this guide well build an app that answers questions about the website's content. The specific website we will use is the [LLM Powered Autonomous
+In this guide we'll build an app that answers questions about the website's content. The specific website we will use is the [LLM Powered Autonomous
 Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post
 by Lilian Weng, which allows us to ask questions about the contents of
 the post.
```
```diff
@@ -289,7 +289,7 @@ trace](https://smith.langchain.com/public/a117a1f8-c96c-4c16-a285-00b85646118e/r
 
 ## Detailed walkthrough
 
-Lets go through the above code step-by-step to really understand whats
+Let's go through the above code step-by-step to really understand what's
 going on.
 
 ## 1. Indexing
```
````diff
@@ -323,15 +323,15 @@ objects.
 
 
 :::python
-In this case well use the
+In this case we'll use the
 [WebBaseLoader](/oss/integrations/document_loaders/web_base),
 which uses `urllib` to load HTML from web URLs and `BeautifulSoup` to
 parse it to text. We can customize the HTML -\> text parsing by passing
 in parameters into the `BeautifulSoup` parser via `bs_kwargs` (see
 [BeautifulSoup
 docs](https://beautiful-soup-4.readthedocs.io/en/latest/#beautifulsoup)).
 In this case only HTML tags with class “post-content”, “post-title”, or
-“post-header” are relevant, so well remove all others.
+“post-header” are relevant, so we'll remove all others.
 
 ```python
 import bs4
````
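The hunk above edits the prose around the `WebBaseLoader` example. The `bs_kwargs` filtering it describes can be seen in isolation with plain `BeautifulSoup` (a standalone sketch; the HTML snippet is made up for illustration, and `WebBaseLoader` simply forwards `parse_only` through `bs_kwargs`):

```python
import bs4

html = """
<html><body>
  <nav>Site navigation</nav>
  <h1 class="post-title">LLM Powered Autonomous Agents</h1>
  <div class="post-content">Agent system overview...</div>
  <footer>Copyright</footer>
</body></html>
"""

# SoupStrainer restricts parsing to tags with the listed classes,
# so navigation, footer, and other chrome never enter the document.
strainer = bs4.SoupStrainer(class_=("post-title", "post-header", "post-content"))
soup = bs4.BeautifulSoup(html, "html.parser", parse_only=strainer)
text = soup.get_text()
```

Only the title and post body survive; everything outside the three listed classes is dropped before any text extraction happens.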
```diff
@@ -410,7 +410,7 @@ into the context window of many models. Even for those models that could
 fit the full post in their context window, models can struggle to find
 information in very long inputs.
 
-To handle this well split the `Document` into chunks for embedding and
+To handle this we'll split the `Document` into chunks for embedding and
 vector storage. This should help us retrieve only the most relevant parts
 of the blog post at run time.
 
```
```diff
@@ -516,7 +516,7 @@ RAG applications commonly work as follows:
 
 ![retrieval_diagram](/images/rag_retrieval_generation.png)
 
-Now lets write the actual application logic. We want to create a simple
+Now let's write the actual application logic. We want to create a simple
 application that takes a user question, searches for documents relevant
 to that question, passes the retrieved documents and initial question to
 a model, and returns an answer.
```
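The flow described in this hunk (question → retrieve → prompt the model → answer) can be sketched with a toy keyword retriever and a stubbed model step (illustrative only; the real app uses a vector store for retrieval and calls a chat model with the prompt):

```python
def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    # Toy keyword retriever: rank documents by word overlap with the question.
    q_words = set(question.lower().split())
    return sorted(
        docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True
    )[:k]

def answer(question: str, docs: list[str]) -> str:
    # A real app would send this prompt to a chat model; we stop at the prompt.
    context = "\n\n".join(retrieve(question, docs))
    return f"Question: {question}\n\nContext:\n{context}"

docs = ["Agents use planning and tools.", "Bananas are yellow fruit."]
prompt = answer("How do agents use planning?", docs)
```

The structure is the same as the real application: retrieval narrows the corpus down to relevant passages, and generation sees only the question plus those passages.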
```diff
@@ -744,7 +744,7 @@ for more advanced formulations.
 
 In the above [agentic RAG](#rag-agents) formulation we allow the LLM to use its discretion in
 generating a [tool call](/oss/langchain/models#tool-calling) to help answer user queries. This
-is a good general purpose solution, but comes with some trade-offs:
+is a good general-purpose solution, but comes with some trade-offs:
 
 | ✅ Benefits | ⚠️ Drawbacks |
 |-----------------------------------------------------------------------------|----------------------------------------------------------------------------|
```
```diff
@@ -775,7 +775,7 @@ def prompt_with_context(state: AgentState) -> list[MessageLikeRepresentation]:
     docs_content = "\n\n".join(doc.page_content for doc in retrieved_docs)
 
     system_message = (
-        "You are a helpful assistant. Use the following context in your reseponse:"
+        "You are a helpful assistant. Use the following context in your response:"
         f"\n\n{docs_content}"
     )
 
```
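The corrected string in this hunk sits inside a prompt-building step. Roughly, the system message is assembled like this (a standalone sketch with a stubbed `Document` type; the real `prompt_with_context` also reads the retrieved docs out of the agent state):

```python
from dataclasses import dataclass

@dataclass
class Document:
    # Stand-in for LangChain's Document; only page_content is needed here.
    page_content: str

def build_system_message(retrieved_docs: list[Document]) -> str:
    # Join chunk texts with blank lines, then append them to the instruction.
    docs_content = "\n\n".join(doc.page_content for doc in retrieved_docs)
    return (
        "You are a helpful assistant. Use the following context in your response:"
        f"\n\n{docs_content}"
    )

msg = build_system_message([Document("Chunk one."), Document("Chunk two.")])
```

Because the instruction and the context travel in one system message, the model sees the retrieved chunks on every turn without them being mixed into the user's question.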
