langchain: dynamic system prompt w/ middleware docs (#592)
In Python we're now exposing decorators that can be used to generate
simple middleware with a single hook.
TBD what we're doing in JS.
---------
Co-authored-by: Christian Bromann <[email protected]>
Co-authored-by: Copilot <[email protected]>
`src/oss/langchain/agents.mdx` (90 additions, 8 deletions)
@@ -128,10 +128,10 @@ Model instances give you complete control over configuration. Use them when you
#### Dynamic model

:::python
Dynamic models are selected at <Tooltip tip="The execution environment of your agent, containing immutable configuration and contextual data that persists throughout the agent's execution (e.g., user IDs, session details, or application-specific configuration).">runtime</Tooltip> based on the current <Tooltip tip="The data that flows through your agent's execution, including messages, custom fields, and any information that needs to be tracked and potentially modified during processing (e.g., user preferences or tool usage stats).">state</Tooltip> and context. This enables sophisticated routing logic and cost optimization.

To use a dynamic model, you need to provide a function that receives the graph state and runtime and returns an instance of `BaseChatModel` with the tools bound to it using `.bind_tools(tools)`, where `tools` is a subset of the `tools` parameter.
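To illustrate the routing and cost-optimization idea only, here is a framework-free sketch of such a selection function. The `select_model` helper, the thresholds, and the model names are hypothetical — in a real agent the function would receive the graph state and runtime and return a `BaseChatModel` with tools bound via `.bind_tools(tools)`:

```python
def select_model(messages: list[str]) -> str:
    """Pick a cheaper model for short conversations and a stronger
    one once the conversation grows long or large.

    Stand-in for a dynamic-model function: a real implementation
    would inspect the agent state/runtime and return a bound chat
    model rather than a model name string.
    """
    total_chars = sum(len(m) for m in messages)
    if len(messages) > 10 or total_chars > 4000:
        return "advanced-model"
    return "basic-model"
```

The same shape applies on the JS side, with tools bound via `.bindTools(tools)`.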
**`state`**: The data that flows through your agent's execution, including messages, custom fields, and any information that needs to be tracked and potentially modified during processing (e.g., user preferences or tool usage stats).
</Info>

To use a dynamic model, you need to provide a function that receives the graph state and runtime and returns an instance of `BaseChatModel` with the tools bound to it using `.bindTools(tools)`, where `tools` is a subset of the `tools` parameter.
@@ -465,8 +460,95 @@ const agent = createAgent({
When no `prompt` is provided, the agent will infer its task from the messages directly.
#### Dynamic prompts with middleware
:::python
For more advanced use cases where you need to modify the system prompt based on runtime context or agent state, you can use the `modify_model_request` decorator to create a simple custom middleware.
:::
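To show the shape of such a one-hook middleware in a framework-free way — `OneHookMiddleware` and the dict-shaped request are hypothetical stand-ins, not LangChain's actual types, and the real decorator presumably also receives state and runtime — a decorator that lifts a plain function into a middleware object might look like:

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

# Hypothetical stand-in for the real model-request object.
ModelRequest = Dict[str, Any]

@dataclass
class OneHookMiddleware:
    """Minimal stand-in for a middleware exposing a single hook."""
    hook: Callable[[ModelRequest], ModelRequest]

def modify_model_request(func: Callable[[ModelRequest], ModelRequest]) -> OneHookMiddleware:
    """Sketch of a decorator that wraps one function as a one-hook middleware."""
    return OneHookMiddleware(hook=func)

@modify_model_request
def set_system_prompt(request: ModelRequest) -> ModelRequest:
    # The hook runs right before each model invocation and can
    # rewrite the outgoing request, e.g. its system prompt.
    request["system_prompt"] = "You are a helpful assistant."
    return request
```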
:::js
For more advanced use cases where you need to modify the system prompt based on runtime context or agent state, you can use the `modifyModelRequest` decorator to create a simple custom middleware.
:::
A dynamic system prompt is especially useful for personalizing prompts based on user roles, conversation context, or other changing factors:
:::python
```python wrap
from langchain.agents import create_agent, AgentState
from langchain.agents.middleware.types import modify_model_request
```
:::

For more details on message types and formatting, see [Messages](/oss/langchain/messages). For comprehensive middleware documentation, see [Middleware](/oss/langchain/middleware).
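The role-based personalization described above can be sketched without the framework. The `personalize_prompt` helper and the role names are hypothetical; in a real `modify_model_request`-style middleware, the chosen prompt would be attached to the outgoing model request:

```python
BASE_PROMPT = "You are a helpful assistant."

def personalize_prompt(user_role: str) -> str:
    """Return a system prompt tailored to the caller's role.

    A middleware hook would call something like this right before
    each model invocation and set the result as the system prompt.
    """
    if user_role == "expert":
        return f"{BASE_PROMPT} Provide detailed technical responses."
    if user_role == "beginner":
        return f"{BASE_PROMPT} Explain concepts simply and avoid jargon."
    return BASE_PROMPT
```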
@@ -730,6 +731,138 @@ const result = await agent.invoke({
```
:::

### Dynamic system prompt
:::python
A system prompt can be dynamically set right before each model invocation using the `@modify_model_request` decorator. This middleware is particularly useful when the prompt depends on the current agent state or runtime context.
For example, you can adjust the system prompt based on the user's expertise level:
```python
from typing import TypedDict
from langchain.agents import create_agent, AgentState
from langchain.agents.middleware.types import modify_model_request
```
:::

A system prompt can be dynamically set right before each model invocation using the `dynamicSystemPromptMiddleware` middleware. This middleware is particularly useful when the prompt depends on the current agent state or runtime context.
For example, you can adjust the system prompt based on the user's expertise level: