Implemented Edits Endpoint #143

Open · wants to merge 3 commits into master
116 changes: 116 additions & 0 deletions OpenAI_API/Edit/EditEndpoint.cs
@@ -0,0 +1,116 @@
using OpenAI_API.Models;
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

namespace OpenAI_API.Edits
{
/// <summary>
/// This API lets you edit a prompt. Given a prompt and an instruction, it returns an edited version of the prompt. <see href="https://platform.openai.com/docs/api-reference/edits"/>
/// </summary>
public class EditEndpoint : EndpointBase, IEditEndpoint
{
/// <summary>
/// This allows you to set default parameters for every request, for example to set a default temperature or max tokens. For every request, if you do not have a parameter set on the request but do have it set here as a default, the request will automatically pick up the default value.
/// </summary>
public EditRequest DefaultEditRequestArgs { get; set; } = new EditRequest() { Model = Model.TextDavinciEdit };

/// <summary>
/// The name of the endpoint, which is the final path segment in the API URL. For example, "edits".
/// </summary>
protected override string Endpoint { get { return "edits"; } }

/// <summary>
/// Constructor of the api endpoint. Rather than instantiating this yourself, access it through an instance of <see cref="OpenAIAPI"/> as <see cref="OpenAIAPI.Edit"/>.
/// </summary>
/// <param name="api"></param>
internal EditEndpoint(OpenAIAPI api) : base(api) { }

/// <summary>
/// Ask the API to edit the prompt using the specified request. This is non-streaming, so it will wait until the API returns the full result.
/// </summary>
/// <param name="request">The request to send to the API. This does not fall back to default values specified in <see cref="DefaultEditRequestArgs"/>.</param>
/// <returns>Asynchronously returns the edits result. Look in its <see cref="EditResult.Choices"/> property for the edits.</returns>
public async Task<EditResult> CreateEditsAsync(EditRequest request)
{
if(request.Model != Model.TextDavinciEdit.ModelID && request.Model != Model.CodeDavinciEdit.ModelID)
throw new ArgumentException($"Model must be either '{Model.TextDavinciEdit.ModelID}' or '{Model.CodeDavinciEdit.ModelID}'. For more details, refer https://platform.openai.com/docs/api-reference/edits");
return await HttpPost<EditResult>(postData: request);
}

/// <summary>
/// Ask the API to edit the prompt using the specified request and a requested number of outputs. This is non-streaming, so it will wait until the API returns the full result.
/// </summary>
/// <param name="request">The request to send to the API. This does not fall back to default values specified in <see cref="DefaultEditRequestArgs"/>.</param>
/// <param name="numOutputs">Overrides <see cref="EditRequest.NumChoicesPerPrompt"/> as a convenience.</param>
/// <returns>Asynchronously returns the edits result. Look in its <see cref="EditResult.Choices"/> property for the edits, which should have a length equal to <paramref name="numOutputs"/>.</returns>
public Task<EditResult> CreateEditsAsync(EditRequest request, int numOutputs = 5)
{
request.NumChoicesPerPrompt = numOutputs;
return CreateEditsAsync(request);
}

/// <summary>
/// Ask the API to edit the prompt. This is non-streaming, so it will wait until the API returns the full result. Any non-specified parameters will fall back to default values specified in <see cref="DefaultEditRequestArgs"/> if present.
/// </summary>
/// <param name="prompt">The input text to use as a starting point for the edit. Defaults to an empty string.</param>
/// <param name="instruction">The instruction that tells the model how to edit the prompt. (Required)</param>
/// <param name="model">ID of the model to use. You can use <see cref="Model.TextDavinciEdit"/> or <see cref="Model.CodeDavinciEdit"/> for edit endpoint.</param>
/// <param name="temperature">What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. It is generally recommended to use this or <see cref="EditRequest.TopP"/>, but not both.</param>
/// <param name="top_p">An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. It is generally recommended to use this or <see cref="EditRequest.Temperature"/>, but not both.</param>
/// <param name="numOutputs">How many edits to generate for the input and instruction.</param>
/// <returns></returns>
public Task<EditResult> CreateEditsAsync(string prompt,
string instruction,
Model model = null,
double? temperature = null,
double? top_p = null,
int? numOutputs = null
)
{
EditRequest request = new EditRequest(DefaultEditRequestArgs)
{
Input = prompt,
Model = model ?? DefaultEditRequestArgs.Model,
Instruction = string.IsNullOrEmpty(instruction) ? DefaultEditRequestArgs.Instruction : instruction,
Temperature = temperature ?? DefaultEditRequestArgs.Temperature,
TopP = top_p ?? DefaultEditRequestArgs.TopP,
NumChoicesPerPrompt = numOutputs ?? DefaultEditRequestArgs.NumChoicesPerPrompt,
};
return CreateEditsAsync(request);
}


/// <summary>
/// Simply returns the edited text for the given request.
/// </summary>
/// <param name="request">The request to send to the API. This does not fall back to default values specified in <see cref="DefaultEditRequestArgs"/>.</param>
/// <returns>The text of the first (best) edit choice.</returns>
public async Task<string> CreateAndFormatEdits(EditRequest request)
{
var result = await CreateEditsAsync(request);
return result.ToString();
}

/// <summary>
/// Simply returns the best edit
/// </summary>
/// <param name="prompt">The input prompt to be edited</param>
/// <param name="instruction">The instruction that tells the model how to edit the prompt</param>
/// <returns>The best edited result</returns>
public async Task<string> GetEdits(string prompt, string instruction)
{
EditRequest request = new EditRequest(DefaultEditRequestArgs)
{
Input = prompt,
Instruction = instruction,
NumChoicesPerPrompt = 1
};
var result = await CreateEditsAsync(request);
return result.ToString();
}

}
}
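A minimal usage sketch of the new endpoint (a hypothetical console program; it assumes the library's existing `OpenAIAPI` entry point exposes this endpoint as `api.Edit`, as the constructor docs describe, and that the API key comes from the library's usual configuration):

```csharp
using System;
using System.Threading.Tasks;
using OpenAI_API;
using OpenAI_API.Edits;
using OpenAI_API.Models;

public static class EditExample
{
    public static async Task Main()
    {
        var api = new OpenAIAPI();

        // Full-control form: build the request explicitly and inspect all choices.
        var request = new EditRequest(
            input: "The quik brown fox",
            instruction: "Fix the spelling mistakes",
            model: Model.TextDavinciEdit);
        EditResult result = await api.Edit.CreateEditsAsync(request);
        Console.WriteLine(result.Choices[0].Text);

        // Convenience form: just get the best edit back as a string.
        string edited = await api.Edit.GetEdits("The quik brown fox", "Fix the spelling mistakes");
        Console.WriteLine(edited);
    }
}
```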
102 changes: 102 additions & 0 deletions OpenAI_API/Edit/EditRequest.cs
@@ -0,0 +1,102 @@
using Newtonsoft.Json;
using OpenAI_API.Completions;
using OpenAI_API.Models;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace OpenAI_API.Edits
{
/// <summary>
/// Represents a request to the Edit API. Mostly matches the parameters in <see href="https://platform.openai.com/docs/api-reference/edits">the OpenAI docs</see>, although some might have been renamed for ease of use.
/// </summary>
public class EditRequest
{
/// <summary>
/// ID of the model to use. You can use <see cref="Model.TextDavinciEdit"/> or <see cref="Model.CodeDavinciEdit"/> for edit endpoint.
/// </summary>
[JsonProperty("model")]
public string Model { get; set; } = OpenAI_API.Models.Model.TextDavinciEdit;

/// <summary>
/// The input text to use as a starting point for the edit. Defaults to an empty string.
/// </summary>
[JsonProperty("input")]
public string Input { get; set; }

/// <summary>
/// The instruction that tells the model how to edit the prompt. (Required)
/// </summary>
[JsonProperty("instruction")]
public string Instruction { get; set; }

/// <summary>
/// How many edits to generate for the input and instruction.
/// </summary>
[JsonProperty("n")]
public int? NumChoicesPerPrompt { get; set; }

/// <summary>
/// What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. It is generally recommended to use this or <see cref="TopP"/>, but not both.
/// </summary>
[JsonProperty("temperature")]
public double? Temperature { get; set; }

/// <summary>
/// An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. It is generally recommended to use this or <see cref="Temperature"/>, but not both.
/// </summary>
[JsonProperty("top_p")]
public double? TopP { get; set; }


/// <summary>
/// Creates a new, empty <see cref="EditRequest"/> with the default model.
/// </summary>
public EditRequest()
{
this.Model = OpenAI_API.Models.Model.TextDavinciEdit;
}

/// <summary>
/// Creates a new <see cref="EditRequest"/>, inheriting any parameters set in <paramref name="basedOn"/>.
/// </summary>
/// <param name="basedOn">The <see cref="EditRequest"/> to copy</param>
public EditRequest(EditRequest basedOn)
{
this.Model = basedOn.Model;
this.Input = basedOn.Input;
this.Instruction = basedOn.Instruction;
this.Temperature = basedOn.Temperature;
this.TopP = basedOn.TopP;
this.NumChoicesPerPrompt = basedOn.NumChoicesPerPrompt;
}


/// <summary>
/// Creates a new <see cref="EditRequest"/> with the specified parameters
/// </summary>
/// <param name="input">The input text to use as a starting point for the edit. Defaults to an empty string.</param>
/// <param name="instruction">The instruction that tells the model how to edit the prompt. (Required)</param>
/// <param name="model">ID of the model to use. You can use <see cref="Model.TextDavinciEdit"/> or <see cref="Model.CodeDavinciEdit"/> for edit endpoint.</param>
/// <param name="temperature">What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. It is generally recommended to use this or <see cref="TopP"/>, but not both.</param>
/// <param name="top_p">An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. It is generally recommended to use this or <see cref="Temperature"/>, but not both.</param>
/// <param name="numOutputs">How many edits to generate for the input and instruction.</param>
public EditRequest(
string input,
string instruction,
Model model = null,
double? temperature = null,
double? top_p = null,
int? numOutputs = null)
{
this.Model = model ?? OpenAI_API.Models.Model.TextDavinciEdit;
this.Input = input;
this.Instruction = instruction;
this.Temperature = temperature;
this.TopP = top_p;
this.NumChoicesPerPrompt = numOutputs;
}
}
}
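The copy constructor is what powers the defaults fallback in the endpoint's string-based convenience overload. A sketch of how a caller might lean on that (hypothetical values; assumes the endpoint is reached via `OpenAIAPI.Edit` as its docs describe):

```csharp
using System;
using System.Threading.Tasks;
using OpenAI_API;
using OpenAI_API.Edits;

public static class DefaultsExample
{
    public static async Task Main()
    {
        var api = new OpenAIAPI();

        // Set defaults once; the string-based overload copies them
        // via the EditRequest(EditRequest basedOn) constructor.
        api.Edit.DefaultEditRequestArgs.Temperature = 0.2;
        api.Edit.DefaultEditRequestArgs.NumChoicesPerPrompt = 3;

        // temperature and numOutputs are omitted here, so the
        // request picks up the defaults set above.
        var result = await api.Edit.CreateEditsAsync(
            prompt: "Helo world",
            instruction: "Fix the spelling");
        foreach (var choice in result.Choices)
            Console.WriteLine($"{choice.Index}: {choice.Text}");
    }
}
```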
77 changes: 77 additions & 0 deletions OpenAI_API/Edit/EditResult.cs
@@ -0,0 +1,77 @@
using Newtonsoft.Json;
using OpenAI_API.Chat;
using System;
using System.Collections.Generic;
using System.Text;

namespace OpenAI_API.Edits
{
/// <summary>
/// Represents a result from calling the Edit API
/// </summary>
public class EditResult : ApiResultBase
{
/// <summary>
/// The list of edit choices returned by the API for this request
/// </summary>
[JsonProperty("choices")]
public IReadOnlyList<EditChoice> Choices { get; set; }

/// <summary>
/// The usage statistics for the edit call
/// </summary>
[JsonProperty("usage")]
public EditUsage Usage { get; set; }

/// <summary>
/// A convenience method to return the content of the message in the first choice of this response
/// </summary>
/// <returns>The edited text returned by the API as response.</returns>
public override string ToString()
{
if (Choices != null && Choices.Count > 0)
return Choices[0].ToString();
else
return null;
}
}

/// <summary>
/// A message received from the API, including the text and index.
/// </summary>
public class EditChoice
{
/// <summary>
/// The index of the choice in the list of choices
/// </summary>
[JsonProperty("index")]
public int Index { get; set; }

/// <summary>
/// The edited text for this choice, as returned by the API
/// </summary>
[JsonProperty("text")]
public string Text { get; set; }

/// <summary>
/// A convenience method to return the content of the message in this response
/// </summary>
/// <returns>The edited text returned by the API as response.</returns>
public override string ToString()
{
return Text;
}
}

/// <summary>
/// How many tokens were used in this edit message.
/// </summary>
public class EditUsage : Usage
{
/// <summary>
/// The number of completion tokens used during the edit
/// </summary>
[JsonProperty("completion_tokens")]
public int CompletionTokens { get; set; }
}
}
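For reference, a sketch of the response shape these classes deserialize, using a payload modeled on the example in the OpenAI edits documentation (field values are illustrative; Newtonsoft.Json ignores any fields not mapped here):

```csharp
using System;
using Newtonsoft.Json;
using OpenAI_API.Edits;

public static class EditResultExample
{
    public static void Main()
    {
        // Illustrative payload following the documented edits response shape.
        string json = @"{
          ""object"": ""edit"",
          ""created"": 1589478378,
          ""choices"": [ { ""text"": ""What day of the week is it?"", ""index"": 0 } ],
          ""usage"": { ""prompt_tokens"": 25, ""completion_tokens"": 32, ""total_tokens"": 57 }
        }";

        var result = JsonConvert.DeserializeObject<EditResult>(json);
        Console.WriteLine(result.Choices[0].Text);       // the edited text
        Console.WriteLine(result.Usage.CompletionTokens); // 32
        Console.WriteLine(result.ToString());            // convenience: first choice's text
    }
}
```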
71 changes: 71 additions & 0 deletions OpenAI_API/Edit/IEditEndpoint.cs
@@ -0,0 +1,71 @@
using OpenAI_API.Models;
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;

namespace OpenAI_API.Edits
{
/// <summary>
/// An interface for <see cref="EditEndpoint"/>, for ease of mock testing, etc
/// </summary>
public interface IEditEndpoint
{
/// <summary>
/// This allows you to set default parameters for every request, for example to set a default temperature or max tokens. For every request, if you do not have a parameter set on the request but do have it set here as a default, the request will automatically pick up the default value.
/// </summary>
EditRequest DefaultEditRequestArgs { get; set; }

/// <summary>
/// Ask the API to edit the prompt using the specified request. This is non-streaming, so it will wait until the API returns the full result.
/// </summary>
/// <param name="request">The request to send to the API. This does not fall back to default values specified in <see cref="DefaultEditRequestArgs"/>.</param>
/// <returns>Asynchronously returns the edits result. Look in its <see cref="EditResult.Choices"/> property for the edits.</returns>
Task<EditResult> CreateEditsAsync(EditRequest request);

/// <summary>
/// Ask the API to edit the prompt using the specified request and a requested number of outputs. This is non-streaming, so it will wait until the API returns the full result.
/// </summary>
/// <param name="request">The request to send to the API. This does not fall back to default values specified in <see cref="DefaultEditRequestArgs"/>.</param>
/// <param name="numOutputs">Overrides <see cref="EditRequest.NumChoicesPerPrompt"/> as a convenience.</param>
/// <returns>Asynchronously returns the edits result. Look in its <see cref="EditResult.Choices"/> property for the edits, which should have a length equal to <paramref name="numOutputs"/>.</returns>
Task<EditResult> CreateEditsAsync(EditRequest request, int numOutputs = 5);


/// <summary>
/// Ask the API to edit the prompt. This is non-streaming, so it will wait until the API returns the full result. Any non-specified parameters will fall back to default values specified in <see cref="DefaultEditRequestArgs"/> if present.
/// </summary>
/// <param name="prompt">The input text to use as a starting point for the edit. Defaults to an empty string.</param>
/// <param name="instruction">The instruction that tells the model how to edit the prompt. (Required)</param>
/// <param name="model">ID of the model to use. You can use <see cref="Model.TextDavinciEdit"/> or <see cref="Model.CodeDavinciEdit"/> for edit endpoint.</param>
/// <param name="temperature">What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. It is generally recommended to use this or <see cref="EditRequest.TopP"/>, but not both.</param>
/// <param name="top_p">An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. It is generally recommended to use this or <see cref="EditRequest.Temperature"/>, but not both.</param>
/// <param name="numOutputs">How many edits to generate for the input and instruction.</param>
/// <returns></returns>
Task<EditResult> CreateEditsAsync(string prompt,
string instruction,
Model model = null,
double? temperature = null,
double? top_p = null,
int? numOutputs = null
);



/// <summary>
/// Simply returns the edited text for the given request.
/// </summary>
/// <param name="request">The request to send to the API. This does not fall back to default values specified in <see cref="DefaultEditRequestArgs"/>.</param>
/// <returns>The text of the first (best) edit choice.</returns>
Task<string> CreateAndFormatEdits(EditRequest request);

/// <summary>
/// Simply returns the best edit
/// </summary>
/// <param name="prompt">The input prompt to be edited</param>
/// <param name="instruction">The instruction that tells the model how to edit the prompt</param>
/// <returns>The best edited result</returns>
Task<string> GetEdits(string prompt, string instruction);

}
}