feat (ai/core): add streamText sendStart & sendFinish data stream options (#5047)

Co-authored-by: Nico Albanese <[email protected]>
lgrammel and nicoalbanese authored Mar 4, 2025
1 parent 49ee53a commit 0cb2647
Showing 10 changed files with 393 additions and 12 deletions.
5 changes: 5 additions & 0 deletions .changeset/proud-cougars-suffer.md
@@ -0,0 +1,5 @@
---
'ai': patch
---

feat (ai/core): add streamText sendStart & sendFinish data stream options
105 changes: 105 additions & 0 deletions content/cookbook/01-next/24-stream-text-multistep.mdx
@@ -0,0 +1,105 @@
---
title: streamText Multi-Step Cookbook
description: Learn how to create several streamText steps with different settings
tags: ['next', 'streaming']
---

# Stream Text Multi-Step

You may want your stream to consist of several steps, where each step uses different settings,
e.g. a different model, different tools, or a different system prompt.

With `createDataStreamResponse` and the `sendFinish` / `sendStart` options when merging
into the data stream, you can control when the finish and start events are sent to the client,
which lets you combine several steps into a single assistant UI message.
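The effect of these options can be pictured with a small simulation. The sketch below is a hypothetical mental model, not the AI SDK implementation: `mergeStep`, `MergeOptions`, and the event strings are made up for illustration, with part prefixes borrowed from the data stream protocol (`f:` message start, `0:` text delta, `e:` step finish, `d:` message finish).

```typescript
// Hypothetical mini-model of merging steps into one data stream.
// The sendStart / sendFinish flags gate the 'f:' and 'd:' events.
type MergeOptions = { sendStart?: boolean; sendFinish?: boolean };

function mergeStep(
  stream: string[],
  textDeltas: string[],
  { sendStart = true, sendFinish = true }: MergeOptions = {},
): void {
  if (sendStart) stream.push('f:'); // message start event
  for (const delta of textDeltas) stream.push(`0:${delta}`);
  stream.push('e:'); // step finish event (always sent)
  if (sendFinish) stream.push('d:'); // message finish event
}

const stream: string[] = [];
// step 1: suppress the message finish event
mergeStep(stream, ['goal'], { sendFinish: false });
// step 2: suppress the message start event
mergeStep(stream, ['answer'], { sendStart: false });

console.log(stream);
// → ['f:', '0:goal', 'e:', '0:answer', 'e:', 'd:']
```

Because step 1 suppresses `d:` and step 2 suppresses `f:`, the concatenated stream contains exactly one start and one finish event, so the client renders both steps as a single assistant message.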

## Server

```typescript filename='app/api/chat/route.ts'
import { openai } from '@ai-sdk/openai';
import { createDataStreamResponse, streamText, tool } from 'ai';
import { z } from 'zod';

export async function POST(req: Request) {
const { messages } = await req.json();

return createDataStreamResponse({
execute: async dataStream => {
// step 1 example: forced tool call
const result1 = streamText({
model: openai('gpt-4o-mini', { structuredOutputs: true }),
system: 'Extract the user goal from the conversation.',
messages,
toolChoice: 'required', // force the model to call a tool
tools: {
extractGoal: tool({
parameters: z.object({ goal: z.string() }),
execute: async ({ goal }) => goal, // no-op extract tool
}),
},
});

// forward the initial result to the client without the finish event:
result1.mergeIntoDataStream(dataStream, {
experimental_sendFinish: false, // omit the finish event
});

// note: you can use any programming construct here, e.g. if-else, loops, etc.
// workflow programming is normal programming with this approach.

// example: continue stream with forced tool call from previous step
const result2 = streamText({
// different system prompt, different model, no tools:
model: openai('gpt-4o'),
system:
'You are a helpful assistant with a different system prompt. Repeat the extracted user goal in your answer.',
// continue the workflow stream with the messages from the previous step:
messages: [...messages, ...(await result1.response).messages],
});

// forward the 2nd result to the client (incl. the finish event):
result2.mergeIntoDataStream(dataStream, {
experimental_sendStart: false, // omit the start event
});
},
});
}
```

## Client

```tsx filename="app/page.tsx"
'use client';

import { useChat } from '@ai-sdk/react';

export default function Chat() {
const { messages, input, handleInputChange, handleSubmit } = useChat();

return (
<div>
{messages?.map(message => (
<div key={message.id}>
<strong>{`${message.role}: `}</strong>
{message.parts.map((part, index) => {
switch (part.type) {
case 'text':
return <span key={index}>{part.text}</span>;
case 'tool-invocation': {
return (
<pre key={index}>
{JSON.stringify(part.toolInvocation, null, 2)}
</pre>
);
}
}
})}
</div>
))}
<form onSubmit={handleSubmit}>
<input value={input} onChange={handleInputChange} />
</form>
</div>
);
}
```
56 changes: 56 additions & 0 deletions content/docs/07-reference/01-ai-sdk-core/02-stream-text.mdx
@@ -2205,6 +2205,20 @@ To see `streamText` in action, check out [these examples](#examples).
description:
'Whether to send the sources information in the stream. Defaults to false.',
},
{
name: 'experimental_sendFinish',
type: 'boolean',
isOptional: true,
description:
'Send the finish event to the client. Set to false if you are using additional streamText calls that send additional data. Defaults to true.',
},
{
name: 'experimental_sendStart',
type: 'boolean',
isOptional: true,
description:
'Send the message start event to the client. Set to false if you are using additional streamText calls and the message start event has already been sent. Defaults to true.',
},
],
},
],
@@ -2282,6 +2296,20 @@ To see `streamText` in action, check out [these examples](#examples).
description:
'Whether to send the sources information in the stream. Defaults to false.',
},
{
name: 'experimental_sendFinish',
type: 'boolean',
isOptional: true,
description:
'Send the finish event to the client. Set to false if you are using additional streamText calls that send additional data. Defaults to true.',
},
{
name: 'experimental_sendStart',
type: 'boolean',
isOptional: true,
description:
'Send the message start event to the client. Set to false if you are using additional streamText calls and the message start event has already been sent. Defaults to true.',
},
],
},
],
@@ -2347,6 +2375,20 @@ To see `streamText` in action, check out [these examples](#examples).
description:
'Whether to send the sources information in the stream. Defaults to false.',
},
{
name: 'experimental_sendFinish',
type: 'boolean',
isOptional: true,
description:
'Send the finish event to the client. Set to false if you are using additional streamText calls that send additional data. Defaults to true.',
},
{
name: 'experimental_sendStart',
type: 'boolean',
isOptional: true,
description:
'Send the message start event to the client. Set to false if you are using additional streamText calls and the message start event has already been sent. Defaults to true.',
},
],
},
],
@@ -2412,6 +2454,20 @@ To see `streamText` in action, check out [these examples](#examples).
description:
'Whether to send the sources information in the stream. Defaults to false.',
},
{
name: 'experimental_sendFinish',
type: 'boolean',
isOptional: true,
description:
'Send the finish event to the client. Set to false if you are using additional streamText calls that send additional data. Defaults to true.',
},
{
name: 'experimental_sendStart',
type: 'boolean',
isOptional: true,
description:
'Send the message start event to the client. Set to false if you are using additional streamText calls and the message start event has already been sent. Defaults to true.',
},
],
},
],
@@ -0,0 +1,48 @@
import { openai } from '@ai-sdk/openai';
import { createDataStreamResponse, streamText, tool } from 'ai';
import { z } from 'zod';

export async function POST(req: Request) {
const { messages } = await req.json();

return createDataStreamResponse({
execute: async dataStream => {
// step 1 example: forced tool call
const result1 = streamText({
model: openai('gpt-4o-mini', { structuredOutputs: true }),
system: 'Extract the user goal from the conversation.',
messages,
toolChoice: 'required', // force the model to call a tool
tools: {
extractGoal: tool({
parameters: z.object({ goal: z.string() }),
execute: async ({ goal }) => goal, // no-op extract tool
}),
},
});

// forward the initial result to the client without the finish event:
result1.mergeIntoDataStream(dataStream, {
experimental_sendFinish: false, // omit the finish event
});

// note: you can use any programming construct here, e.g. if-else, loops, etc.
// workflow programming is normal programming with this approach.

// example: continue stream with forced tool call from previous step
const result2 = streamText({
// different system prompt, different model, no tools:
model: openai('gpt-4o'),
system:
'You are a helpful assistant with a different system prompt. Repeat the extracted user goal in your answer.',
// continue the workflow stream with the messages from the previous step:
messages: [...messages, ...(await result1.response).messages],
});

// forward the 2nd result to the client (incl. the finish event):
result2.mergeIntoDataStream(dataStream, {
experimental_sendStart: false, // omit the start event
});
},
});
}
61 changes: 61 additions & 0 deletions examples/next-openai/app/use-chat-streamdata-multistep/page.tsx
@@ -0,0 +1,61 @@
'use client';

import { useChat } from '@ai-sdk/react';

export default function Chat() {
const { messages, input, handleInputChange, handleSubmit, data, setData } =
useChat({ api: '/api/use-chat-streamdata-multistep' });

return (
<div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
{data && (
<>
<pre className="p-4 text-sm bg-gray-100">
{JSON.stringify(data, null, 2)}
</pre>
<button
onClick={() => setData(undefined)}
className="px-4 py-2 mt-2 text-white bg-blue-500 rounded hover:bg-blue-600 focus:outline-none focus:ring-2 focus:ring-blue-500 focus:ring-opacity-50"
>
Clear Data
</button>
</>
)}

{messages?.map(message => (
<div key={message.id} className="whitespace-pre-wrap">
<strong>{`${message.role}: `}</strong>
{message.parts.map((part, index) => {
switch (part.type) {
case 'text':
return <span key={index}>{part.text}</span>;
case 'tool-invocation': {
return (
<pre key={index}>
{JSON.stringify(part.toolInvocation, null, 2)}
</pre>
);
}
}
})}
<br />
<br />
</div>
))}

<form
onSubmit={e => {
setData(undefined); // clear stream data
handleSubmit(e);
}}
>
<input
className="fixed bottom-0 w-full max-w-md p-2 mb-8 border border-gray-300 rounded shadow-xl"
value={input}
placeholder="Say something..."
onChange={handleInputChange}
/>
</form>
</div>
);
}
@@ -3781,6 +3781,17 @@ exports[`streamText > result.pipeDataStreamToResponse > should mask error messag
]
`;

exports[`streamText > result.pipeDataStreamToResponse > should omit message finish event (d:) when sendFinish is false 1`] = `
[
  "f:{\"messageId\":\"msg-0\"}
",
  "0:\"Hello, World!\"
",
  "e:{\"finishReason\":\"stop\",\"usage\":{\"promptTokens\":3,\"completionTokens\":10},\"isContinued\":false}
",
]
`;
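For reference, each line in these snapshots is a data stream part of the form `<code>:<json>`. The sketch below decodes the part types that appear in the snapshot; the prefix meanings (`f:` message start, `0:` text delta, `e:` step finish, `d:` message finish) are inferred from the snapshot names and surrounding context, and this is not the SDK's own parser.

```typescript
// Sketch: decode one data-stream part line into a typed object.
type StreamPart =
  | { type: 'start'; value: unknown }
  | { type: 'text'; value: string }
  | { type: 'finish_step'; value: unknown }
  | { type: 'finish_message'; value: unknown };

function parseDataStreamPart(line: string): StreamPart {
  const separatorIndex = line.indexOf(':');
  const code = line.slice(0, separatorIndex);
  const value = JSON.parse(line.slice(separatorIndex + 1));
  switch (code) {
    case 'f':
      return { type: 'start', value };
    case '0':
      return { type: 'text', value: value as string };
    case 'e':
      return { type: 'finish_step', value };
    case 'd':
      return { type: 'finish_message', value };
    default:
      throw new Error(`unknown stream part code: ${code}`);
  }
}

const part = parseDataStreamPart('0:"Hello, World!"');
console.log(part); // → { type: 'text', value: 'Hello, World!' }
```

With `sendFinish: false`, the `d:` part never appears, which is exactly what the snapshot above asserts.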

exports[`streamText > result.pipeDataStreamToResponse > should support custom error messages 1`] = `
[
"f:{"messageId":"msg-0"}
@@ -4228,6 +4239,17 @@ exports[`streamText > result.toDataStream > should mask error messages by defaul
]
`;

exports[`streamText > result.toDataStream > should omit message finish event (d:) when sendFinish is false 1`] = `
[
  "f:{\"messageId\":\"msg-0\"}
",
  "0:\"Hello, World!\"
",
  "e:{\"finishReason\":\"stop\",\"usage\":{\"promptTokens\":3,\"completionTokens\":10},\"isContinued\":false}
",
]
`;

exports[`streamText > result.toDataStream > should send reasoning content when sendReasoning is true 1`] = `
[
"f:{"messageId":"msg-0"}
23 changes: 22 additions & 1 deletion packages/ai/core/generate-text/stream-text-result.ts
@@ -13,11 +13,11 @@ import { Source } from '../types/language-model';
import { LanguageModelResponseMetadata } from '../types/language-model-response-metadata';
import { LanguageModelUsage } from '../types/usage';
import { AsyncIterableStream } from '../util/async-iterable-stream';
import { ReasoningDetail } from './reasoning-detail';
import { StepResult } from './step-result';
import { ToolCallUnion } from './tool-call';
import { ToolResultUnion } from './tool-result';
import { ToolSet } from './tool-set';
import { ReasoningDetail } from './reasoning-detail';

export type DataStreamOptions = {
/**
@@ -38,6 +38,27 @@ export type DataStreamOptions = {
* Defaults to false.
*/
sendSources?: boolean;

/**
* Send the finish event to the client.
* Set to false if you are using additional streamText calls
* that send additional data.
* Defaults to true.
*/
experimental_sendFinish?: boolean;

/**
* Send the message start event to the client.
* Set to false if you are using additional streamText calls
* and the message start event has already been sent.
* Defaults to true.
*
* Note: this setting is currently not used, but you should
* already set it to false if you are using additional
* streamText calls that send additional data to prevent
* the message start event from being sent multiple times.
*/
experimental_sendStart?: boolean;
};

/**
