
Commit 9f08cf8

sydney-runkle, christian-bromann, and Copilot authored
langchain: dynamic system prompt w/ middleware docs (#592)
In Python we're now exposing decorators that can be used to generate simple middlewares w/ one hook. TBD what we're doing in JS.

Co-authored-by: Christian Bromann <[email protected]>
Co-authored-by: Copilot <[email protected]>
1 parent 2bca5a2 commit 9f08cf8

File tree

2 files changed (+223, −8 lines)

src/oss/langchain/agents.mdx

Lines changed: 90 additions & 8 deletions
@@ -128,10 +128,10 @@ Model instances give you complete control over configuration. Use them when you

#### Dynamic model

-:::python
-
Dynamic models are selected at <Tooltip tip="The execution environment of your agent, containing immutable configuration and contextual data that persists throughout the agent's execution (e.g., user IDs, session details, or application-specific configuration).">runtime</Tooltip> based on the current <Tooltip tip="The data that flows through your agent's execution, including messages, custom fields, and any information that needs to be tracked and potentially modified during processing (e.g., user preferences or tool usage stats).">state</Tooltip> and context. This enables sophisticated routing logic and cost optimization.

:::python

To use a dynamic model, you need to provide a function that receives the graph state and runtime and returns an instance of `BaseChatModel` with the tools bound to it using `.bind_tools(tools)`, where `tools` is a subset of the `tools` parameter.

```python
@@ -153,11 +153,6 @@ agent = create_agent(select_model, tools=tools)
```
:::
:::js
-<Info>
-**`state`**: The data that flows through your agent's execution, including messages, custom fields, and any information that needs to be tracked and potentially modified during processing (e.g. user preferences or tool usage stats).
-</Info>
-
-Dynamic models are selected at runtime based on the current state and context. This enables sophisticated routing logic and cost optimization.

To use a dynamic model, you need to provide a function that receives the graph state and runtime and returns an instance of `BaseChatModel` with the tools bound to it using `.bindTools(tools)`, where `tools` is a subset of the `tools` parameter.

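The unchanged example between these two hunks is elided by the diff; its hunk header names a `select_model` function. As a rough sketch only of what such a selector might look like (the model choices, the length threshold, and the `tools` list are hypothetical, not from this commit):

```python
from langchain.agents import AgentState
from langchain.chat_models import init_chat_model
from langchain_core.language_models import BaseChatModel
from langgraph.runtime import Runtime

basic_model = init_chat_model("openai:gpt-4o-mini")
advanced_model = init_chat_model("openai:gpt-4o")

def select_model(state: AgentState, runtime: Runtime) -> BaseChatModel:
    """Pick a model from the current state, then bind the tools as the docs describe."""
    # Hypothetical routing rule: use the stronger model for long conversations.
    model = advanced_model if len(state["messages"]) > 10 else basic_model
    return model.bind_tools(tools)  # `tools` is assumed to be defined as in the surrounding docs
```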
@@ -465,8 +460,95 @@ const agent = createAgent({

When no `prompt` is provided, the agent will infer its task from the messages directly.
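For instance, a minimal sketch of that default behavior (assuming a `tools` list as in the surrounding examples):

```python
from langchain.agents import create_agent

# No `prompt` argument: the agent infers its task from the incoming messages.
agent = create_agent(model="openai:gpt-4o", tools=tools)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "Explain machine learning"}]}
)
```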

#### Dynamic prompts with middleware

:::python
For more advanced use cases where you need to modify the system prompt based on runtime context or agent state, you can use the `modify_model_request` decorator to create a simple custom middleware.
:::
:::js
For more advanced use cases where you need to modify the system prompt based on runtime context or agent state, you can use the `dynamicSystemPromptMiddleware` middleware.
:::

A dynamic system prompt is especially useful for personalizing prompts based on user roles, conversation context, or other changing factors:

:::python
```python wrap
from langchain.agents import create_agent, AgentState
from langchain.agents.middleware.types import ModelRequest, modify_model_request
from langgraph.runtime import Runtime
from typing import TypedDict

class Context(TypedDict):
    user_role: str

@modify_model_request
def dynamic_system_prompt(state: AgentState, request: ModelRequest, runtime: Runtime[Context]) -> ModelRequest:
    user_role = runtime.context.get("user_role", "user")
    base_prompt = "You are a helpful assistant."

    if user_role == "expert":
        prompt = f"{base_prompt} Provide detailed technical responses."
    elif user_role == "beginner":
        prompt = f"{base_prompt} Explain concepts simply and avoid jargon."
    else:
        prompt = base_prompt

    request.system_prompt = prompt
    return request

agent = create_agent(
    model="openai:gpt-4o",
    tools=tools,
    middleware=[dynamic_system_prompt],
)

# The system prompt will be set dynamically based on context
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Explain machine learning"}]},
    {"context": {"user_role": "expert"}}
)
```
:::

:::js
```typescript wrap
import { z } from "zod";
import { createAgent } from "langchain";
import { dynamicSystemPromptMiddleware } from "langchain/middleware";

const contextSchema = z.object({
  userRole: z.enum(["expert", "beginner"]),
});

const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [/* ... */],
  contextSchema,
  middleware: [
    dynamicSystemPromptMiddleware<z.infer<typeof contextSchema>>((state, runtime) => {
      const userRole = runtime.context.userRole || "user";
      const basePrompt = "You are a helpful assistant.";

      if (userRole === "expert") {
        return `${basePrompt} Provide detailed technical responses.`;
      } else if (userRole === "beginner") {
        return `${basePrompt} Explain concepts simply and avoid jargon.`;
      }
      return basePrompt;
    }),
  ],
});

// The system prompt will be set dynamically based on context
const result = await agent.invoke(
  { messages: [{ role: "user", content: "Explain machine learning" }] },
  { context: { userRole: "expert" } }
);
```
:::

<Tip>
-For more details on message types and formatting, see [Messages](/oss/langchain/messages).
For more details on message types and formatting, see [Messages](/oss/langchain/messages). For comprehensive middleware documentation, see [Middleware](/oss/langchain/middleware).
</Tip>

## Advanced configuration

src/oss/langchain/middleware.mdx

Lines changed: 133 additions & 0 deletions
@@ -151,6 +151,7 @@ LangChain provides several built in middleware to use off-the-shelf
- [Summarization](#summarization)
- [Human-in-the-loop](#human-in-the-loop)
- [Anthropic prompt caching](#anthropic-prompt-caching)
- [Dynamic system prompt](#dynamic-system-prompt)

### Summarization

@@ -730,6 +731,138 @@ const result = await agent.invoke({
```
:::

### Dynamic system prompt

:::python
A system prompt can be dynamically set right before each model invocation using the `@modify_model_request` decorator. This is particularly useful when the prompt depends on the current agent state or runtime context.

For example, you can adjust the system prompt based on the user's expertise level:

```python
from typing import TypedDict

from langchain.agents import create_agent, AgentState
from langchain.agents.middleware.types import ModelRequest, modify_model_request
from langgraph.runtime import Runtime

class Context(TypedDict):
    user_role: str

@modify_model_request
def dynamic_system_prompt(state: AgentState, request: ModelRequest, runtime: Runtime[Context]) -> ModelRequest:
    user_role = runtime.context.get("user_role", "user")
    base_prompt = "You are a helpful assistant."

    if user_role == "expert":
        prompt = f"{base_prompt} Provide detailed technical responses."
    elif user_role == "beginner":
        prompt = f"{base_prompt} Explain concepts simply and avoid jargon."
    else:
        prompt = base_prompt

    request.system_prompt = prompt
    return request

agent = create_agent(
    model="openai:gpt-4o",
    tools=[web_search],
    middleware=[dynamic_system_prompt],
)

# Use with context
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Explain async programming"}]},
    {"context": {"user_role": "expert"}}
)
```
:::
:::js

A system prompt can be dynamically set right before each model invocation using the `dynamicSystemPromptMiddleware` middleware. This middleware is particularly useful when the prompt depends on the current agent state or runtime context.

For example, you can adjust the system prompt based on the user's expertise level:

```typescript
import { z } from "zod";
import { createAgent } from "langchain";
import { dynamicSystemPromptMiddleware } from "langchain/middleware";

const contextSchema = z.object({
  userRole: z.enum(["expert", "beginner"]),
});

const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [/* ... */],
  contextSchema,
  middleware: [
    dynamicSystemPromptMiddleware<z.infer<typeof contextSchema>>((state, runtime) => {
      const userRole = runtime.context.userRole || "user";
      const basePrompt = "You are a helpful assistant.";

      if (userRole === "expert") {
        return `${basePrompt} Provide detailed technical responses.`;
      } else if (userRole === "beginner") {
        return `${basePrompt} Explain concepts simply and avoid jargon.`;
      }
      return basePrompt;
    }),
  ],
});

// The system prompt will be set dynamically based on context
const result = await agent.invoke(
  { messages: [{ role: "user", content: "Explain async programming" }] },
  { context: { userRole: "expert" } }
);
```
:::

Alternatively, you can adjust the system prompt based on the conversation length:

:::python
```python
from langchain.agents import create_agent, AgentState
from langchain.agents.middleware.types import ModelRequest, modify_model_request

@modify_model_request
def simple_prompt(state: AgentState, request: ModelRequest) -> ModelRequest:
    message_count = len(state["messages"])

    if message_count > 10:
        prompt = "You are in an extended conversation. Be more concise."
    else:
        prompt = "You are a helpful assistant."

    request.system_prompt = prompt
    return request

agent = create_agent(
    model="openai:gpt-4o",
    tools=[search_tool],
    middleware=[simple_prompt],
)
```
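Since this prompt depends only on agent state, no runtime context is needed at invocation time. A minimal usage sketch (the question text is illustrative):

```python
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Explain async programming"}]}
)
```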
:::

:::js
```typescript
import { createAgent } from "langchain";
import { dynamicSystemPromptMiddleware } from "langchain/middleware";

const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [searchTool],
  middleware: [
    dynamicSystemPromptMiddleware((state) => {
      const messageCount = state.messages.length;

      if (messageCount > 10) {
        return "You are in an extended conversation. Be more concise.";
      }
      return "You are a helpful assistant.";
    }),
  ],
});
```
:::

## Custom Middleware

Middleware for agents are subclasses of `AgentMiddleware`, which implement one or more of its hooks.
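The hunk ends here, so the rest of the section is elided by the diff. As a rough sketch of the class-based form (the hook name and signature are assumed to mirror the `@modify_model_request` decorator above, and the import path for `AgentMiddleware` is likewise an assumption):

```python
from langchain.agents import AgentState
from langchain.agents.middleware.types import AgentMiddleware, ModelRequest
from langgraph.runtime import Runtime

class ExtendedConversationPrompt(AgentMiddleware):
    """Hypothetical middleware: tighten the system prompt once a conversation grows long."""

    def modify_model_request(self, request: ModelRequest, state: AgentState, runtime: Runtime) -> ModelRequest:
        if len(state["messages"]) > 10:
            request.system_prompt = "You are in an extended conversation. Be more concise."
        return request
```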
