Commit
v2 of public api
atinylittleshell committed Nov 7, 2023
1 parent 51cdc39 commit 095c534
Showing 18 changed files with 364 additions and 992 deletions.
19 changes: 19 additions & 0 deletions .changeset/nasty-hairs-destroy.md
@@ -0,0 +1,19 @@
+---
+'function-gpt': major
+---
+
+Revamped public API to provide only the core functionality
+
+OpenAI has just announced their Assistants API, which also allows function
+calling. The previous API design of function-gpt was coupled with the chat
+completion API and thus would not be flexible enough for this library to work
+well with the new Assistants API.
+
+As a result, the public API of this library has been revamped to provide only
+the core functionality of generating function-calling schemas and executing
+function calls on demand.
+
+The previous ChatGPTSession class was removed, as it was coupled with the chat
+completion API. A new class FunctionCallingProvider is introduced and can be
+used instead of ChatGPTSession for defining functions to be used by function
+calling.
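The changeset describes the new surface only in prose. As a rough illustration, here is a self-contained TypeScript sketch of the shape that prose implies — `getSchema` and `handleFunctionCall` are named in this commit's README changes, but the registration mechanism, schema layout, and all internals below are assumptions for illustration, not the library's actual implementation:

```typescript
// Illustrative sketch only -- not the function-gpt implementation.
// A provider maps function names to implementations and exposes
// (1) a JSON schema describing them and (2) a dispatcher that executes
// a named function with JSON-encoded arguments.

type FunctionSpec = {
  name: string;
  description: string;
  parameters: object; // JSON Schema for the single object parameter
  fn: (args: any) => Promise<unknown> | unknown;
};

class SketchFunctionCallingProvider {
  private specs = new Map<string, FunctionSpec>();

  register(spec: FunctionSpec): void {
    this.specs.set(spec.name, spec);
  }

  // Schema metadata suitable for a model's function/tool-calling API.
  getSchema(): object[] {
    return Array.from(this.specs.values()).map(
      ({ name, description, parameters }) => ({ name, description, parameters }),
    );
  }

  // Execute a function by name with JSON-formatted arguments.
  async handleFunctionCall(name: string, argsJson: string): Promise<unknown> {
    const spec = this.specs.get(name);
    if (!spec) throw new Error(`Unknown function: ${name}`);
    return spec.fn(JSON.parse(argsJson));
  }
}

// Usage with a stubbed "browse" function.
const provider = new SketchFunctionCallingProvider();
provider.register({
  name: 'browse',
  description: 'make http request to a url and return its html content',
  parameters: {
    type: 'object',
    properties: { url: { type: 'string' } },
    required: ['url'],
  },
  fn: ({ url }) => `<html>fetched ${url}</html>`, // stub, no real request
});

provider
  .handleFunctionCall('browse', JSON.stringify({ url: 'https://example.com' }))
  .then((result) => console.log(result)); // logs "<html>fetched https://example.com</html>"
```

The point of the split is visible even in this sketch: the `getSchema` output can be handed to either the Chat Completion API or the Assistants API, while `handleFunctionCall` stays API-agnostic.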
1 change: 1 addition & 0 deletions .do_tasks
@@ -0,0 +1 @@
+provide example with integration of OpenAI's node.js client
4 changes: 2 additions & 2 deletions .editorconfig
@@ -3,5 +3,5 @@ indent_style=space
 indent_size=2
 tab_width=2
 end_of_line=lf
-insert_final_newline=false
-charset=utf-8
+insert_final_newline=true
+charset=utf-8
2 changes: 1 addition & 1 deletion .prettierrc.json
@@ -2,6 +2,6 @@
   "semi": true,
   "trailingComma": "all",
   "singleQuote": true,
-  "printWidth": 120,
+  "printWidth": 80,
   "tabWidth": 2
 }
38 changes: 14 additions & 24 deletions README.md
@@ -1,22 +1,21 @@
 # Function-GPT
 
-> This is a typescript library that helps handle [function calling](https://platform.openai.com/docs/guides/gpt/function-calling) with OpenAI's ChatGPT API.
+> This is a typescript library that helps handle [function calling](https://platform.openai.com/docs/guides/gpt/function-calling) with OpenAI.
 [![NPM](https://img.shields.io/npm/v/function-gpt.svg)](https://www.npmjs.com/package/function-gpt)
 [![Build Status](https://github.com/atinylittleshell/function-gpt/actions/workflows/publish.yml/badge.svg)](https://github.com/atinylittleshell/function-gpt/actions/workflows/publish.yml)
 [![codecov](https://codecov.io/gh/atinylittleshell/function-gpt/graph/badge.svg?token=1R81CX1Z14)](https://codecov.io/gh/atinylittleshell/function-gpt)
 [![MIT License](https://img.shields.io/badge/license-MIT-blue)](https://github.com/atinylittleshell/function-gpt/blob/main/license)
 
-- Leverages the official [openai](https://www.npmjs.com/package/openai) npm package for communicating with OpenAI's API
 - Uses typescript decorators to provide metadata for function calling
 - Automatically generate function calling JSON schema from decorated typescript functions
-- Automatically parse function calling response
-- Automatically call functions and send back results to OpenAI
+- Automatically call functions based on name and JSON-formatted arguments
+- Can be used with OpenAI's Chat Completion API as well as the Assistants API
 
 ## Example
 
 ```typescript
-import { gptFunction, gptString, ChatGPTSession } from 'function-gpt';
+import { gptFunction, gptString, FunctionCallingProvider } from 'function-gpt';
 
 // Define the type of the input parameter for functions above.
 class BrowseParams {
@@ -25,9 +24,9 @@ class BrowseParams {
   public url!: string;
 }
 
-// Create your own class that extends ChatGPTSession.
-class BrowseSession extends ChatGPTSession {
-  // Define functions that you want to provide to ChatGPT for function calling.
+// Create your own class that extends FunctionCallingProvider.
+class BrowseProvider extends FunctionCallingProvider {
+  // Define functions that you want to provide to OpenAI for function calling.
   // Decorate each function with @gptFunction to provide necessary metadata.
   // The function should accept a single parameter that is a typed object.
   @gptFunction('make http request to a url and return its html content', BrowseParams)
@@ -37,22 +36,13 @@ class BrowseSession extends ChatGPTSession {
   }
 }
 
-const session = new BrowseSession();
-const response = await session.send('count characters in the html content of https://www.google.com.');
+const provider = new BrowseProvider();
 
-// BrowseSession will first call OpenAI's ChatGPT API with the above prompt
-// along with metadata about the browse function.
-
-// OpenAI's ChatGPT API will then return a function calling response that
-// asks for making a call to the browse function.
-
-// BrowseSession will then call the browse function with the parameters
-// specified in OpenAI's function calling response, and then send back the
-// result to OpenAI's ChatGPT API.
-
-// OpenAI's ChatGPT API will then return a message that contains the
-// chat response.
-expect(response).toBe('There are 4096 characters in the html content of https://www.google.com/.');
+const schema = await provider.getSchema();
+const result = await provider.handleFunctionCall(
+  'browse',
+  JSON.stringify({ url: 'https://www.google.com' }),
+);
 ```
 
 ## API References
@@ -71,4 +61,4 @@ pnpm add function-gpt
 
 ## Contributing
 
-Contributions are welcome! See [CONTRIBUTING.md](./CONTRIBUTING.md) for more info.
\ No newline at end of file
+Contributions are welcome! See [CONTRIBUTING.md](./CONTRIBUTING.md) for more info.
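To connect the revamped README example to an actual chat loop, here is a hedged, self-contained sketch of one function-calling round trip. No OpenAI client or network call is involved: the assistant message is a hand-written stand-in for what a model might return, and the dispatcher is re-implemented inline rather than imported from the library:

```typescript
// Illustrative only: simulate the round trip of a function-calling
// exchange without contacting any API.

// Inline stand-in for a provider's handleFunctionCall(name, argsJson).
async function handleFunctionCall(name: string, argsJson: string): Promise<string> {
  if (name !== 'browse') throw new Error(`unknown function: ${name}`);
  const { url } = JSON.parse(argsJson) as { url: string };
  return `<html>content of ${url}</html>`; // stubbed fetch
}

// Hand-written stand-in for an assistant message requesting a call.
const assistantMessage = {
  role: 'assistant' as const,
  content: null,
  function_call: { name: 'browse', arguments: '{"url":"https://example.com"}' },
};

async function main(): Promise<void> {
  const call = assistantMessage.function_call;
  const result = await handleFunctionCall(call.name, call.arguments);
  // In a real loop, this result would go back to the model as a
  // message with role "function" (or "tool"), and the model would
  // then produce the final chat response.
  const followUp = { role: 'function' as const, name: call.name, content: result };
  console.log(followUp.content); // logs "<html>content of https://example.com</html>"
}

main();
```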
127 changes: 11 additions & 116 deletions doc/README.md
@@ -6,14 +6,7 @@ function-gpt
 
 ### Classes
 
-- [ChatGPTSession](classes/ChatGPTSession.md)
-
-### Type Aliases
-
-- [ChatGPTSessionOptions](README.md#chatgptsessionoptions)
-- [ChatGPTFunctionCall](README.md#chatgptfunctioncall)
-- [ChatGPTSessionMessage](README.md#chatgptsessionmessage)
-- [ChatGPTSendMessageOptions](README.md#chatgptsendmessageoptions)
+- [FunctionCallingProvider](classes/FunctionCallingProvider.md)
 
 ### Functions
 
@@ -26,112 +19,14 @@ function-gpt
 - [gptEnum](README.md#gptenum)
 - [gptArray](README.md#gptarray)
 
-## Type Aliases
-
-### ChatGPTSessionOptions
-
-Ƭ **ChatGPTSessionOptions**: `Object`
-
-Options for the ChatGPTSession constructor. Compatible with the OpenAI node client options.
-
-**`See`**
-
-[OpenAI Node Client](https://github.com/openai/openai-node)
-
-#### Type declaration
-
-| Name | Type | Description |
-| :------ | :------ | :------ |
-| `apiKey?` | `string` | Your API key for the OpenAI API. **`Default`** ```ts process.env["OPENAI_API_KEY"] ``` |
-| `baseURL?` | `string` | Override the default base URL for the API, e.g., "https://api.example.com/v2/" |
-| `systemMessage?` | `string` | A system message to send to the assistant before the user's first message. Useful for setting up the assistant's behavior. **`Default`** ```ts No system message set. ``` |
-| `timeout?` | `number` | The maximum amount of time (in milliseconds) that the client should wait for a response from the server before timing out a single request. Note that request timeouts are retried by default, so in a worst-case scenario you may wait much longer than this timeout before the promise succeeds or fails. |
-| `maxRetries?` | `number` | The maximum number of times that the client will retry a request in case of a temporary failure, like a network error or a 5XX error from the server. **`Default`** ```ts 2 ``` |
-| `dangerouslyAllowBrowser?` | `boolean` | By default, client-side use of this library is not allowed, as it risks exposing your secret API credentials to attackers. Only set this option to `true` if you understand the risks and have appropriate mitigations in place. |
-
-#### Defined in
-
-[src/session.ts:71](https://github.com/atinylittleshell/function-gpt/blob/24758c8/src/session.ts#L71)
-
-___
-
-### ChatGPTFunctionCall
-
-Ƭ **ChatGPTFunctionCall**: `Object`
-
-Represents a function call requested by ChatGPT.
-
-#### Type declaration
-
-| Name | Type |
-| :------ | :------ |
-| `name` | `string` |
-| `arguments` | `string` |
-
-#### Defined in
-
-[src/session.ts:119](https://github.com/atinylittleshell/function-gpt/blob/24758c8/src/session.ts#L119)
-
-___
-
-### ChatGPTSessionMessage
-
-Ƭ **ChatGPTSessionMessage**: `Object`
-
-Represents a message in a ChatGPT session.
-
-#### Type declaration
-
-| Name | Type |
-| :------ | :------ |
-| `role` | ``"system"`` \| ``"user"`` \| ``"assistant"`` \| ``"function"`` |
-| `name?` | `string` |
-| `content` | `string` \| ``null`` |
-| `function_call?` | [`ChatGPTFunctionCall`](README.md#chatgptfunctioncall) |
-
-#### Defined in
-
-[src/session.ts:127](https://github.com/atinylittleshell/function-gpt/blob/24758c8/src/session.ts#L127)
-
-___
-
-### ChatGPTSendMessageOptions
-
-Ƭ **ChatGPTSendMessageOptions**: `Object`
-
-Options for the ChatGPTSession.send method.
-
-**`See`**
-
-[OpenAI Chat Completion API](https://platform.openai.com/docs/api-reference/chat/create).
-
-#### Type declaration
-
-| Name | Type | Description |
-| :------ | :------ | :------ |
-| `function_call_execute_only?` | `boolean` | Stop the session after executing the function call. Useful when you don't need to give ChatGPT the result of the function call. Defaults to `false`. |
-| `model` | `string` | ID of the model to use. **`See`** [model endpoint compatibility](https://platform.openai.com/docs/models/overview) |
-| `frequency_penalty?` | `number` \| ``null`` | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. **`See`** [See more information about frequency and presence penalties.](https://platform.openai.com/docs/api-reference/parameter-details) |
-| `function_call?` | ``"none"`` \| ``"auto"`` \| { `name`: `string` } | Controls how the model responds to function calls. "none" means the model does not call a function, and responds to the end-user. "auto" means the model can pick between an end-user or calling a function. Specifying a particular function via `{"name":\ "my_function"}` forces the model to call that function. "none" is the default when no functions are present. "auto" is the default if functions are present. |
-| `logit_bias?` | `Record`<`string`, `number`\> \| ``null`` | Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. |
-| `max_tokens?` | `number` | The maximum number of [tokens](/tokenizer) to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb) for counting tokens. |
-| `presence_penalty?` | `number` \| ``null`` | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. [See more information about frequency and presence penalties.](https://platform.openai.com/docs/api-reference/parameter-details) |
-| `stop?` | `string` \| ``null`` \| `string`[] | Up to 4 sequences where the API will stop generating further tokens. |
-| `temperature?` | `number` \| ``null`` | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. |
-| `top_p?` | `number` \| ``null`` | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both. |
-| `user?` | `string` | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. **`See`** [Learn more](https://platform.openai.com/docs/guides/safety-best-practices). |
-
-#### Defined in
-
-[src/session.ts:139](https://github.com/atinylittleshell/function-gpt/blob/24758c8/src/session.ts#L139)
 
 ## Functions
 
 ### gptFunction
 
 **gptFunction**(`description`, `inputType`): (`target`: `object`, `propertyKey`: `string`, `descriptor`: `PropertyDescriptor`) => `void`
 
-Use this decorator on a method within a ChatGPTSession subclass to enable it for function-calling.
+Use this decorator on a method within a FunctionCallingProvider subclass
+to enable it for function-calling.
 
 #### Parameters
 
@@ -164,7 +59,7 @@ Use this decorator on a method within a ChatGPTSession subclass to enable it for
 
 #### Defined in
 
-[src/decorators.ts:19](https://github.com/atinylittleshell/function-gpt/blob/24758c8/src/decorators.ts#L19)
+[src/decorators.ts:20](https://github.com/atinylittleshell/function-gpt/blob/51cdc39/src/decorators.ts#L20)
 
 ___
 
@@ -201,7 +96,7 @@ Use this decorator on a property within a custom class to include it as a parame
 
 #### Defined in
 
-[src/decorators.ts:53](https://github.com/atinylittleshell/function-gpt/blob/24758c8/src/decorators.ts#L53)
+[src/decorators.ts:61](https://github.com/atinylittleshell/function-gpt/blob/51cdc39/src/decorators.ts#L61)
 
 ___
 
@@ -237,7 +132,7 @@ Use this decorator on a string property within a custom class to include it as a
 
 #### Defined in
 
-[src/decorators.ts:142](https://github.com/atinylittleshell/function-gpt/blob/24758c8/src/decorators.ts#L142)
+[src/decorators.ts:158](https://github.com/atinylittleshell/function-gpt/blob/51cdc39/src/decorators.ts#L158)
 
 ___
 
@@ -273,7 +168,7 @@ Use this decorator on a number property within a custom class to include it as a
 
 #### Defined in
 
-[src/decorators.ts:152](https://github.com/atinylittleshell/function-gpt/blob/24758c8/src/decorators.ts#L152)
+[src/decorators.ts:168](https://github.com/atinylittleshell/function-gpt/blob/51cdc39/src/decorators.ts#L168)
 
 ___
 
@@ -309,7 +204,7 @@ Use this decorator on a boolean property within a custom class to include it as
 
 #### Defined in
 
-[src/decorators.ts:162](https://github.com/atinylittleshell/function-gpt/blob/24758c8/src/decorators.ts#L162)
+[src/decorators.ts:178](https://github.com/atinylittleshell/function-gpt/blob/51cdc39/src/decorators.ts#L178)
 
 ___
 
@@ -346,7 +241,7 @@ Use this decorator on a custom class property within a custom class to include i
 
 #### Defined in
 
-[src/decorators.ts:173](https://github.com/atinylittleshell/function-gpt/blob/24758c8/src/decorators.ts#L173)
+[src/decorators.ts:189](https://github.com/atinylittleshell/function-gpt/blob/51cdc39/src/decorators.ts#L189)
 
 ___
 
@@ -383,7 +278,7 @@ Use this decorator on a custom class property within a custom class to include i
 
 #### Defined in
 
-[src/decorators.ts:184](https://github.com/atinylittleshell/function-gpt/blob/24758c8/src/decorators.ts#L184)
+[src/decorators.ts:204](https://github.com/atinylittleshell/function-gpt/blob/51cdc39/src/decorators.ts#L204)
 
 ___
 
@@ -420,4 +315,4 @@ Use this decorator on an array of strings property within a custom class to incl
 
 #### Defined in
 
-[src/decorators.ts:194](https://github.com/atinylittleshell/function-gpt/blob/24758c8/src/decorators.ts#L194)
+[src/decorators.ts:218](https://github.com/atinylittleshell/function-gpt/blob/51cdc39/src/decorators.ts#L218)