
Chat completion request streaming #271

Open
@lufzle

Description

Confirm this is a feature request for the Node library and not the underlying OpenAI API.

  • This is a feature request for the Node library

Describe the feature or improvement you're requesting

It would be great to have an additional overload for chat.completions.create that can be used to stream requests.

In the most basic form it could be something like:

create(
    body: ReadableStream<string>,
    options?: Core.RequestOptions,
): APIPromise<Stream<ChatCompletionChunk>>;

However, passing the completion parameters inside the stream may be cumbersome. Since streaming the request body only pays off for large message contexts, a better option could be to use the stream for the messages alone:

create(
    body: Omit<CompletionCreateParamsStreaming, "messages">,
    messages: ReadableStream<string>,
    options?: Core.RequestOptions,
): APIPromise<Stream<ChatCompletionChunk>>;

Usage example (of second approach)

// getMessageHistory makes a call to a database and returns a `ReadableStream<string>`
// The content of the stream is a JSON array of `CreateChatCompletionRequestMessage`
const stream = await getMessageHistory();

const params = {
  model: 'gpt-3.5-turbo',
  stream: true,
};

const res = await openai.chat.completions.create(params, stream);
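Internally, such an overload could splice the streamed messages into the serialized request body before handing it to `fetch` (which supports streaming request bodies via `duplex: 'half'` in Node 18+ and Edge runtimes). A rough sketch of that idea — `buildStreamingBody` is not part of the openai package, all names here are hypothetical, and it assumes `params` is a non-empty object and the stream carries a complete JSON array:

```typescript
// Hypothetical helper: concatenate the serialized params with the caller's
// messages stream to form a streaming JSON request body.
function buildStreamingBody(
  params: Record<string, unknown>,
  messages: ReadableStream<string>,
): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  // Serialize the params, drop the closing brace, and append the key for the
  // `messages` array that will be streamed in from the caller.
  const prefix = JSON.stringify(params).slice(0, -1) + ',"messages":';
  const reader = messages.getReader();
  return new ReadableStream<Uint8Array>({
    start(controller) {
      controller.enqueue(encoder.encode(prefix));
    },
    async pull(controller) {
      const { done, value } = await reader.read();
      if (done) {
        // Close the JSON object once the messages array has been fully piped.
        controller.enqueue(encoder.encode('}'));
        controller.close();
      } else {
        controller.enqueue(encoder.encode(value));
      }
    },
  });
}
```

The resulting stream could then be passed as the `body` of a `fetch` call (with `duplex: 'half'`), so the request is sent without ever materializing the full message history in memory.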

Additional context

This feature would be particularly useful in Edge environments.
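Without such an overload, the entire history has to be buffered in memory before calling `create` — which is exactly what is costly in memory-constrained Edge environments. A minimal workaround sketch today, under the assumptions of the example above (`getMessageHistory` and the message shape are illustrative, not real SDK APIs):

```typescript
// Illustrative message shape; the real SDK type is
// `CreateChatCompletionRequestMessage`.
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

// Drain the stream into a single string, then parse the JSON array.
// This buffers the whole history in memory, which the proposed overload
// would avoid.
async function readMessages(stream: ReadableStream<string>): Promise<ChatMessage[]> {
  let text = '';
  const reader = stream.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    text += value;
  }
  return JSON.parse(text) as ChatMessage[];
}

// Usage with the existing API (not executed here):
// const messages = await readMessages(await getMessageHistory());
// const res = await openai.chat.completions.create({
//   model: 'gpt-3.5-turbo',
//   stream: true,
//   messages,
// });
```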


Labels: enhancement (New feature or request)
