Feature Description
To provide feedback in rich user interfaces, it is often desirable to receive lifecycle notifications from the LLM on tool use. We propose to enrich the tool interface with callbacks that fire before and after a tool is called. Today, the Vercel AI SDK exposes the onStepFinish callback, which runs after a tool has been invoked. Some applications would benefit from an onStepInit or beforeStep callback to update their user interface or take actions before each tool execution. Admittedly, we could instrument the execute function to know when a tool is being invoked, but depending on how the code separates the tools from the main application logic, this may not be desirable, as it introduces strong coupling between the two.
Furthermore, I'd like to raise the question of whether these callbacks should be exposed only at the generation level (generateText, generateObject), or also at the tool level.
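For illustration, a minimal sketch of how a generation-level hook could behave. The option names are hypothetical (onStepInit is not part of the current SDK), and the loop below only simulates a step runner to show where each callback would fire relative to the tool call:

```typescript
// Hypothetical callback shape for the proposal; `onStepInit` does not exist
// in the AI SDK today, while `onStepFinish`-style hooks do.
type StepCallbacks = {
  onStepInit?: (step: { toolName: string }) => void;
  onStepFinish?: (step: { toolName: string; result: unknown }) => void;
};

// Simulated step loop: runs each planned tool call and fires the callbacks
// around it, mirroring where a generation-level hook would sit.
async function runSteps(
  tools: Record<string, (args: any) => Promise<unknown>>,
  plannedCalls: { toolName: string; args: any }[],
  callbacks: StepCallbacks = {},
): Promise<unknown[]> {
  const results: unknown[] = [];
  for (const call of plannedCalls) {
    callbacks.onStepInit?.({ toolName: call.toolName }); // proposed: before the tool runs
    const result = await tools[call.toolName](call.args);
    callbacks.onStepFinish?.({ toolName: call.toolName, result }); // existing-style: after
    results.push(result);
  }
  return results;
}
```

With such a hook, the application could render "Running weather lookup…" as soon as onStepInit fires, without touching the tool's own execute body.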
Use Cases
User-interface updates (e.g. being able to give the user feedback about the details of the LLM tool call).
Being able to decouple lifecycle logic from both the tool implementation and the application.
Additional context
For context, here is the helper I use today. The final implementation need not use the same logic; I'm including mine only as an example.
```ts
import { Tool } from 'ai';

export type ToolDefinition = Tool & {
  onBeforeExecute?: () => void;
  onAfterExecute?: () => void;
  setOnBeforeExecute: (fn: () => void) => ToolDefinition;
  setOnAfterExecute: (fn: () => void) => ToolDefinition;
};

/**
 * The `tool` function makes it possible to define a tool using the Vercel AI SDK as follows:
 *
 * ```ts
 * export const myTool = tool({
 *   description: 'My tool description',
 *   parameters: MyToolInputSchema,
 *   execute: async ({ message }, opts) => {
 *     // Your tool logic here
 *   }
 * });
 * ```
 *
 * Below we patch the `tool` function to add lifecycle hooks to the `execute` function.
 */
export const withLifecycleHooks = (toolDefinition: Tool): ToolDefinition => {
  const originalExecute = toolDefinition.execute;
  const anyDefinition = toolDefinition as any;

  // Add a lifecycle hook to the tool definition, executing
  // before the `execute` function.
  anyDefinition.setOnBeforeExecute = (fn: ToolDefinition['onBeforeExecute']) => {
    anyDefinition.onBeforeExecute = fn;
    return (toolDefinition);
  };

  // Add a lifecycle hook to the tool definition, executing
  // after the `execute` function.
  anyDefinition.setOnAfterExecute = (fn: ToolDefinition['onAfterExecute']) => {
    anyDefinition.onAfterExecute = fn;
    return (toolDefinition);
  };

  if (originalExecute) {
    toolDefinition.execute = async function (parameters, options) {
      if (anyDefinition.onBeforeExecute) {
        anyDefinition.onBeforeExecute();
      }
      const result = await originalExecute(parameters, options);
      if (anyDefinition.onAfterExecute) {
        anyDefinition.onAfterExecute();
      }
      return (result);
    };
  }
  return (toolDefinition as ToolDefinition);
};
```
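For illustration, here is how the helper would typically be wired up. This is a self-contained sketch: Tool is stubbed locally so the snippet runs without the SDK, the wrapper body is a trimmed copy of the helper above, and the tool and hook contents are made up for the example.

```typescript
// `Tool` is stubbed locally so the sketch runs without the 'ai' package.
type Tool = { execute?: (parameters: any, options?: any) => Promise<any> };
type ToolDefinition = Tool & {
  onBeforeExecute?: () => void;
  onAfterExecute?: () => void;
  setOnBeforeExecute: (fn: () => void) => ToolDefinition;
  setOnAfterExecute: (fn: () => void) => ToolDefinition;
};

// Trimmed copy of the helper above: wraps `execute` so registered hooks
// fire before and after the original tool body.
const withLifecycleHooks = (toolDefinition: Tool): ToolDefinition => {
  const originalExecute = toolDefinition.execute;
  const anyDefinition = toolDefinition as any;
  anyDefinition.setOnBeforeExecute = (fn: () => void) => {
    anyDefinition.onBeforeExecute = fn;
    return anyDefinition;
  };
  anyDefinition.setOnAfterExecute = (fn: () => void) => {
    anyDefinition.onAfterExecute = fn;
    return anyDefinition;
  };
  if (originalExecute) {
    toolDefinition.execute = async (parameters, options) => {
      anyDefinition.onBeforeExecute?.();
      const result = await originalExecute(parameters, options);
      anyDefinition.onAfterExecute?.();
      return result;
    };
  }
  return anyDefinition as ToolDefinition;
};

// The tool body stays pure; the application attaches its hooks where it
// owns the interface, with no coupling between the two.
const calls: string[] = [];
const weatherTool = withLifecycleHooks({
  execute: async ({ city }: { city: string }) => {
    calls.push('execute');
    return `Sunny in ${city}`;
  },
})
  .setOnBeforeExecute(() => calls.push('before'))
  .setOnAfterExecute(() => calls.push('after'));
```

Note that the setters return the definition, so hooks can be chained fluently at the call site where the UI lives, rather than inside the tool module.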