⚠️ Most of this functionality has now been built into OpenAI's official libraries, so this library will not be maintained or expanded further.
Lightweight utils that make it easier to build agents with GPT.
The goal of this project is to provide simple tools that give you total flexibility to implement your own agent, rather than forcing you into an opinionated framework.
See [`examples/agent.ts`](examples/agent.ts) for a full example of an agent implemented in under 50 lines of code.
This project doesn't have any official releases yet. If you want to try it, you have two options:

- Build from source:

  ```sh
  pnpm run build
  ```

- Install from the repo:

  ```sh
  npm i git+https://github.com/MaxMusing/gpt-agent-utils.git
  ```
Tools are functions you define that give GPT the ability to take actions beyond generating text. They can be used to fetch or mutate data in external systems, or to perform more complex computations.
You can define a `Tool` using a Zod schema and a callback function. The callback can optionally be async. See [`examples/tools/getCurrentWeather.ts`](examples/tools/getCurrentWeather.ts) for a full example.
```ts
import { z } from "zod";
import { type Tool } from "gpt-agent-utils";

const schema = z.object({
  location: z.string().describe("The city and state, e.g. San Francisco, CA"),
  unit: z.enum(["celsius", "fahrenheit"]).optional(),
});

function callback({ location, unit = "fahrenheit" }: z.infer<typeof schema>) {
  // Hard-coded example data; a real tool would call a weather API here.
  const weatherInfo = {
    location,
    temperature: "72",
    unit,
    forecast: ["sunny", "windy"],
  };

  return JSON.stringify(weatherInfo);
}

const tool: Tool = {
  name: "get_current_weather",
  description: "Get the current weather in a given location",
  schema,
  callback,
};
```
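For reference, OpenAI's Chat Completions API expects function definitions as JSON Schema. A tool like the one above maps to roughly this payload (hand-written here as a sketch; the exact output of `generateFunctions` may differ in detail):

```ts
// JSON Schema "functions" payload corresponding to the weather tool above.
// Hand-written sketch; the library's exact output is an assumption.
const functions = [
  {
    name: "get_current_weather",
    description: "Get the current weather in a given location",
    parameters: {
      type: "object",
      properties: {
        location: {
          type: "string",
          description: "The city and state, e.g. San Francisco, CA",
        },
        unit: { type: "string", enum: ["celsius", "fahrenheit"] },
      },
      required: ["location"],
    },
  },
];
```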
You can use `generateFunctions` to convert tools into the format OpenAI expects, then use `handleFunctionCall` to parse the response from GPT and run the appropriate tool's callback.
```ts
import { generateFunctions, handleFunctionCall } from "gpt-agent-utils";

const tools = [getCurrentWeather];

const response = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages,
  functions: generateFunctions(tools),
});

if (response.choices[0].message.function_call) {
  await handleFunctionCall({
    functionCall: response.choices[0].message.function_call,
    tools,
  });
}
```
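Conceptually, handling a function call amounts to looking up the tool by name, parsing the JSON arguments string that GPT returns, and invoking the callback. Here is a minimal self-contained sketch of that dispatch step (not the library's actual implementation, which presumably also validates the arguments against the tool's Zod schema):

```ts
// Minimal sketch of dispatching a GPT function call to a tool callback.
// NOT the library's implementation; real code should validate the parsed
// arguments against the tool's schema before invoking the callback.
type SimpleTool = {
  name: string;
  callback: (args: Record<string, unknown>) => string | Promise<string>;
};

async function dispatchFunctionCall(
  functionCall: { name: string; arguments: string },
  tools: SimpleTool[],
): Promise<string> {
  const tool = tools.find((t) => t.name === functionCall.name);
  if (!tool) {
    throw new Error(`Unknown function: ${functionCall.name}`);
  }

  // GPT returns arguments as a JSON string, which may be malformed,
  // so JSON.parse can throw here.
  const args = JSON.parse(functionCall.arguments) as Record<string, unknown>;
  return tool.callback(args);
}
```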
GPT has a limited context window, so you need to selectively limit what context gets passed on each call. You can use `truncateMessages` to select the most recent messages that fit within a provided token limit.
```ts
import { generateFunctions, truncateMessages } from "gpt-agent-utils";

const tools = [getCurrentWeather];

const response = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: truncateMessages({ messages, tools, tokenLimit: 1000 }),
  functions: generateFunctions(tools),
});
```
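To build intuition for what recency-based truncation does, here is a toy self-contained sketch. It is not the library's implementation: it uses a crude roughly-4-characters-per-token estimate, whereas `truncateMessages` presumably uses a real tokenizer and also accounts for the token cost of the function definitions.

```ts
// Toy recency-based truncation: keep the newest messages that fit the budget.
// NOT the library's implementation; the 4-chars-per-token estimate is a
// deliberate simplification for illustration.
type ChatMessage = { role: string; content: string };

function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function truncateToFit(messages: ChatMessage[], tokenLimit: number): ChatMessage[] {
  const kept: ChatMessage[] = [];
  let used = 0;

  // Walk from the newest message backwards, keeping as many as fit.
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimateTokens(messages[i].content);
    if (used + cost > tokenLimit) break;
    kept.unshift(messages[i]);
    used += cost;
  }

  return kept;
}
```

Dropping whole messages from the oldest end keeps the conversation coherent for the model, at the cost of it forgetting earlier turns; pinning a system prompt outside the truncated window is a common refinement.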