This repository has been archived by the owner on Jun 2, 2024. It is now read-only.

Commit

Function Calling and pricing changes (#24)
maxijonson committed Jun 18, 2023
1 parent 5d455ca commit 3d67fd5
Showing 19 changed files with 758 additions and 56 deletions.
4 changes: 3 additions & 1 deletion README.md
@@ -18,9 +18,11 @@ GPT Turbo is a JavaScript library for seamless integration with OpenAI's Chat Co

## Features

> ✨ New (June 2023): Added support for **Function calling**
🤖 Supports all Chat Completion models, including **GPT-4**. (full list [here](https://platform.openai.com/docs/models/model-endpoint-compatibility))

-💬 Supports both single and streamed completions, just like ChatGPT.
+💬 Supports both single, streamed and function completions, just like ChatGPT.

⚙ Tune chat completion parameters, such as temperature, top-p, and frequency penalty.

4 changes: 4 additions & 0 deletions packages/cli/src/components/Message.tsx
Expand Up @@ -5,11 +5,13 @@ import React from "react";
export const SENDER_USER = "You";
export const SENDER_ASSISTANT = "GPT";
export const SENDER_SYSTEM = "SYS";
export const SENDER_FUNCTION = "FUN";
export const SENDER_SUFFIX = ": ";
export const SENDER_WIDTH = [
    SENDER_USER,
    SENDER_ASSISTANT,
    SENDER_SYSTEM,
    SENDER_FUNCTION,
].reduce(
    (max, sender) => Math.max(max, sender.length + SENDER_SUFFIX.length),
    0
@@ -28,6 +30,8 @@ export default ({ message }: MessageProps) => {
            return SENDER_ASSISTANT;
        case "system":
            return SENDER_SYSTEM;
        case "function":
            return SENDER_FUNCTION;
        case "user":
        default:
            return SENDER_USER;
20 changes: 18 additions & 2 deletions packages/cli/src/hooks/usePagedMessages.ts
@@ -29,15 +29,31 @@ export default (messages: Message[], maxWidth: number, maxHeight: number) => {
    const msgs = messages.slice();
    for (let i = 0; i < msgs.length; i++) {
        const message = msgs[i];
-        const messageHeight = getMessageHeight(message.content, maxWidth);
+        const messageContent = (() => {
+            if (message.isCompletion()) {
+                return message.content;
+            }
+            if (message.isFunction()) {
+                return `${message.name}() => ${message.content}`;
+            }
+            if (message.isFunctionCall()) {
+                const { name, arguments: args } = message.functionCall;
+                const parameters = Object.entries(args)
+                    .map(([param, value]) => `${param}=${value}`)
+                    .join(", ");
+                return `${name}(${parameters})`;
+            }
+            return "[Unknown message type]";
+        })();
+        const messageHeight = getMessageHeight(messageContent, maxWidth);
        const isHuge = messageHeight > maxHeight;
        const isOverflowing = pageHeight + messageHeight > maxHeight;

        // FIXME: Not yet perfect. May overflow temporarily until the window is resized or a message is added.
        if (isHuge) {
            const remainingHeight = maxHeight - pageHeight;
            const [firstMessageContent, secondMessageContent] =
-                splitMessage(message.content, maxWidth, remainingHeight);
+                splitMessage(messageContent, maxWidth, remainingHeight);

            if (firstMessageContent.length && secondMessageContent.length) {
                msgs[i] = new Message(
3 changes: 3 additions & 0 deletions packages/discord/src/managers/ConversationManager.ts
@@ -39,6 +39,9 @@ export default class ConversationManager {
            max_tokens: maxTokens,
        });

        // Should never happen, since we're not using functions. But this check provides type guards.
        if (!response.isCompletion()) throw new Error("Not a completion");

        try {
            await conversation.addAssistantMessage(response.content);
        } finally {
109 changes: 104 additions & 5 deletions packages/lib/README.md
@@ -47,13 +47,13 @@ const conversation = new Conversation({
});

const response = await conversation.prompt("How can I make my code more efficient than a droid army?");
-process.stdout.write(`Response: `);
-const unsubscribe = response.onMessageUpdate((content) => {
-    process.stdout.write(content);
+const unsubscribeUpdate = response.onMessageUpdate((content) => {
+    console.log(content);
});

-response.onStreamingStop(() => {
-    unsubscribe();
+const unsubscribeStop = response.onStreamingStop(() => {
+    unsubscribeUpdate();
+    unsubscribeStop();
});
```

@@ -147,6 +147,105 @@ const second = await conversation.prompt("Take a seat, young Skywalker."); // "I
const edit = await conversation.reprompt(first, "We grant you the rank of Master.");
```

### Function Calling

> ⚠ Function calling is relatively new and the implementation in this library may change as more is discovered about it.
>
> Limitations (of the GPT Turbo library) with function calling:
> - Token count is not currently calculated for assistant function calls and context. This means the cost of function calls is not taken into account at the moment. This will be fixed in a future release, as I learn more about how function call tokens are calculated by OpenAI.
> - Function calls are not currently supported in dry mode. There is no planned support for this in the near future.
> - While this feature is typed, it may not be as strongly typed as you'd expect. In other words, there is no strict type checking of the function name and arguments against the definition you gave in the configuration's `functions` property. This may or may not be improved in the future, depending on how relevant strong typing is for this feature without sacrificing usability.

You can use OpenAI's Function Calling feature with GPT Turbo through the `functionPrompt` method. Define your functions in the conversation configuration (or at prompt time), just as you would with the Chat Completion API directly.

⚠ Unless you configure `function_call` to explicitly call a function by name (by default it is `auto`, which lets the model decide), make sure you also plan for standard chat completions in your code. To help with detecting which type of response you got, the `Message` class exposes two (type-guarded!) functions: `isFunctionCall` and `isCompletion`.
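
The way these type guards narrow a response can be sketched with a plain discriminated union. This is a simplified, hypothetical model for illustration only, not the library's actual `Message` class:

```typescript
// Hypothetical shapes of the two kinds of response a prompt can yield.
interface CompletionMessage {
    role: "assistant";
    content: string;
}

interface FunctionCallMessage {
    role: "assistant";
    functionCall: { name: string; arguments: Record<string, unknown> };
}

type ResponseMessage = CompletionMessage | FunctionCallMessage;

// A user-defined type guard, in the spirit of Message.isFunctionCall.
const isFunctionCall = (m: ResponseMessage): m is FunctionCallMessage =>
    "functionCall" in m;

// Branching on the guard narrows `m` in each arm, so both property
// accesses below are type-safe.
const describe = (m: ResponseMessage): string =>
    isFunctionCall(m)
        ? `function call: ${m.functionCall.name}`
        : `completion: ${m.content}`;

console.log(describe({ role: "assistant", content: "Hello there!" }));
// completion: Hello there!
```

The same idea is what makes the `if (r1.isCompletion()) { ... } else if (r1.isFunctionCall()) { ... }` pattern below compile without casts.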

> At the time of writing, Function Calling is not supported on the latest version of the GPT model. In this example, we'll use the `gpt-3.5-turbo-0613` model, but the standard `gpt-3.5-turbo` model might work at the time you're reading this.

```ts
const locateJedi = (jedi: string, locationType: "planet" | "city" = "planet") => {
    return {
        name: jedi,
        location: locationType === "planet" ? "Tatooine" : "Mos Eisley",
    };
};

const conversation = new Conversation({
    apiKey: /* your API key */ "",
    model: "gpt-3.5-turbo-0613",
    functions: [
        {
            name: "locateJedi",
            description: "Returns the current location of a Jedi",
            parameters: {
                type: "object",
                properties: {
                    jedi: {
                        type: "string",
                        description: "The name of the Jedi to locate",
                    },
                    locationType: {
                        type: "string",
                        enum: ["planet", "city"],
                    },
                },
                required: ["jedi"],
            },
        },
    ],
});

const r1 = await conversation.prompt("Where can I find Obi-Wan Kenobi?");

if (r1.isCompletion()) {
    console.info(r1.content);
} else if (r1.isFunctionCall()) {
    const { jedi, locationType } = r1.functionCall.arguments;
    const r2 = await conversation.functionPrompt(
        r1.functionCall.name,
        locateJedi(jedi, locationType)
    );
    console.info(r2.content); // "Obi-Wan Kenobi can be found on Tatooine."
}
```

For streamed completions, handling function calls gets a bit more complicated, but it is still supported! Hopefully, a better flow will be implemented in the future.

```ts
const conversation = new Conversation({ /* ... */ stream: true });

const r1 = await conversation.prompt("In which city is Obi-Wan Kenobi?");

const unsubscribeUpdates = r1.onMessageUpdate((_, message) => {
    if (!message.isCompletion()) {
        return;
    }
    console.info(message.content);
});

const unsubscribeStop = r1.onMessageStreamingStop(async (message) => {
    if (message.isFunctionCall()) {
        const { jedi, locationType } = message.functionCall.arguments;
        const r2 = await conversation.functionPrompt(
            message.functionCall.name,
            locateJedi(jedi, locationType)
        );

        const unsubscribeFunctionUpdate = r2.onMessageUpdate((content) => {
            console.info(content); // "Obi-Wan Kenobi is located in the city of Mos Eisley."
        });

        const unsubscribeFunctionStop = r2.onMessageStreamingStop(() => {
            unsubscribeFunctionUpdate();
            unsubscribeFunctionStop();
        });
    }

    unsubscribeUpdates();
    unsubscribeStop();
});
```
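
The subscribe-and-unsubscribe pattern used in both streaming examples boils down to listeners whose registration call returns its own removal function. A minimal, standalone sketch of that pattern (a hypothetical `Emitter`, not part of the GPT Turbo API):

```typescript
type Listener<T> = (value: T) => void;

// Minimal event source: subscribe() returns an unsubscribe function,
// the same shape as onMessageUpdate/onMessageStreamingStop above.
class Emitter<T> {
    private listeners = new Set<Listener<T>>();

    subscribe(listener: Listener<T>): () => void {
        this.listeners.add(listener);
        return () => void this.listeners.delete(listener);
    }

    emit(value: T): void {
        for (const listener of this.listeners) listener(value);
    }
}

const emitter = new Emitter<string>();
const received: string[] = [];
const unsubscribe = emitter.subscribe((chunk) => received.push(chunk));

emitter.emit("first chunk");
unsubscribe(); // stop listening, e.g. once streaming stops
emitter.emit("second chunk"); // not received

console.log(received); // ["first chunk"]
```

Capturing the returned function in a `const` and calling it inside the stop handler is why the examples above can clean up their own subscriptions once the stream ends.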

## Documentation

View the full documentation [here](https://gpt-turbo.chintristan.io/). The documentation website is auto-generated from the TSDoc comments in the source code for the latest version of the library.
1 change: 0 additions & 1 deletion packages/lib/package.json
@@ -9,7 +9,6 @@
        "lint:strict": "npm run lint -- --max-warnings 0",
        "lint:fix": "npm run lint -- --fix",
        "build": "npm run lint:strict && rimraf dist && tsc -p tsconfig.build.json && copyfiles -u 1 -e \"src/**/*.ts\" \"src/**/*\" dist",
-        "sandbox": "ts-node-esm sandbox/index.ts",
        "docs": "typedoc"
},
"keywords": [