Commit 9cda9fa

Translator and StructuredResponse modules now default to GPT4 model
Changed `ChatGPTTranslatorService` and `OpenAiClientExtensions.GetStructuredResponse` to use the GPT4 model by default, since GPT4 produces more stable responses than the previous models. `README.md` was updated to reflect these changes, and the now-redundant `gpt-3.5-turbo-1106`-specific test cases were removed from `OpenAiClient_GetStructuredResponse.cs`. The change aims to improve translation quality and the stability of responses in the application.
1 parent 743dd83 commit 9cda9fa

File tree

4 files changed: +23 −11 lines changed
README.md

Lines changed: 5 additions & 2 deletions
````diff
@@ -1,9 +1,10 @@
 [![example gif...](assets/chatgpt_console_spectre_example.gif)](samples/ChatGpt.SpectreConsoleExample/Program.cs)
-# ChatGPT integration for .NET
+# ChatGPT integration for .NET (+DI)
 [![Nuget](https://img.shields.io/nuget/v/OpenAI.ChatGPT.EntityFrameworkCore)](https://www.nuget.org/packages/OpenAI.ChatGPT.EntityFrameworkCore/)[![.NET](https://github.com/rodion-m/ChatGPT_API_dotnet/actions/workflows/dotnet.yml/badge.svg)](https://github.com/rodion-m/ChatGPT_API_dotnet/actions/workflows/dotnet.yml) \
 OpenAI Chat Completions API (ChatGPT) integration with DI and EF Core supporting. It allows you to use the API in your .NET applications. Also, the client supports streaming responses (like ChatGPT) via async streams.
 
-[NEW!] `StructuredResponse` module allows you to get structured responses from the API as C# object. See: [StructuredResponse](#structuredresponse) section.
+## 2023.11 UPD: GPT4Turbo and JSON mode support
+`StructuredResponse` module allows you to get structured responses from the API as C# object. See: [StructuredResponse](#structuredresponse) section.
 
 ## Content
 <!-- TOC -->
@@ -125,6 +126,8 @@ var message = Dialog
 City almaty = await _client.GetStructuredResponse<City>(message);
 Console.WriteLine(almaty); // Name: "Almaty", Country: "Kazakhstan", YearOfFoundation: 1854
 ```
+Under the hood, it uses the new [json mode](https://platform.openai.com/docs/guides/text-generation/json-mode) of the API for GPT4Turbo and for the `gpt-3.5-turbo-1106`. Regular GPT4 and GPT3.5Turbo models are also supported, but GPT3.5 responses may be unstable (for GPT3.5 it's strictly recommended to provide `examples` parameter).
+
 More complex examples with arrays, nested objects and enums are available in tests: https://github.com/rodion-m/ChatGPT_API_dotnet/blob/f50d386f0b65a4ba8c1041a28bab2a1a475c2296/tests/OpenAI.ChatGpt.IntegrationTests/OpenAiClientTests/OpenAiClient_GetStructuredResponse.cs#L1
 
 NuGet: https://www.nuget.org/packages/OpenAI.ChatGPT.Modules.StructuredResponse
````
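To make the effect of the new default concrete, here is a minimal sketch of calling `GetStructuredResponse` after this commit. The `City` record and the client/namespace setup are assumptions for illustration (only the `Dialog` and `GetStructuredResponse<City>` calls appear in the README snippet above), not code from this commit.

```csharp
using OpenAI.ChatGpt; // namespace names are assumptions
using OpenAI.ChatGpt.Modules.StructuredResponse;

// A plain C# type describing the shape of the expected JSON response.
record City(string Name, string Country, int YearOfFoundation);

var client = new OpenAiClient("{API_KEY}");
var message = Dialog
    .StartAsSystem("Return requested data.")
    .ThenUser("Provide information about the city of Almaty.");

// After this commit the model parameter defaults to ChatCompletionModels.Gpt4,
// so no explicit model argument is needed to get stable structured output.
City almaty = await client.GetStructuredResponse<City>(message);
Console.WriteLine(almaty);
```

Passing an explicit `model:` argument still overrides the default, so existing callers pinned to GPT3.5 keep their behavior.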

src/modules/OpenAI.ChatGpt.Modules.StructuredResponse/OpenAiClientExtensions.GetStructuredResponse.cs

Lines changed: 10 additions & 4 deletions
```diff
@@ -35,14 +35,14 @@ public static class OpenAiClientExtensions
     /// <param name="client">The OpenAI client.</param>
     /// <param name="dialog">The chat dialog, including a user message and any system messages that set the behavior of the assistant.</param>
     /// <param name="maxTokens">Optional. The maximum number of tokens in the response. Defaults to the limit of the model, minus the number of input tokens, minus 500.</param>
-    /// <param name="model">Optional. The name of the model to use. Defaults to "text-davinci-002" unless the message input is longer than 6000 tokens, in which case it defaults to "text-davinci-003".</param>
+    /// <param name="model">Optional. The name of the model to use. Defaults to <see cref="ChatCompletionModels.Gpt4"/>. It's recommended to use GPT4+.</param>
     /// <param name="temperature">Controls the randomness of the assistant’s output. Ranges from 0.0 to 1.0, where 0.0 is deterministic and 1.0 is highly random. Default value is the default for the OpenAI API.</param>
-    /// <param name="user">Optional. The user who is having the conversation. If not specified, defaults to "system".</param>
+    /// <param name="user">Optional. The user ID who is having the conversation.</param>
     /// <param name="requestModifier">Optional. A function that can modify the chat completion request before it is sent to the API.</param>
     /// <param name="rawResponseGetter">Optional. A function that can access the raw API response.</param>
     /// <param name="jsonDeserializerOptions">Optional. Custom JSON deserializer options for the deserialization. If not specified, default options with case insensitive property names are used.</param>
     /// <param name="jsonSerializerOptions">Optional. Custom JSON serializer options for the serialization.</param>
-    /// <param name="examples">Optional. Example of the models those will be serialized using <paramref name="jsonSerializerOptions"/></param>
+    /// <param name="examples">Optional. Example of the models those will be serialized using <paramref name="jsonSerializerOptions"/>.</param>
     /// <param name="cancellationToken">Optional. A cancellation token that can be used to cancel the operation.</param>
     /// <returns>
     /// A task that represents the asynchronous operation. The task result contains the deserialized object from the API response.
@@ -114,7 +114,13 @@ internal static async Task<TObject> GetStructuredResponse<TObject>(
     {
         editMsg.Content += GetAdditionalJsonResponsePrompt(responseFormat, examples, jsonSerializerOptions);
 
-        (model, maxTokens) = FindOptimalModelAndMaxToken(dialog.GetMessages(), model, maxTokens);
+        (model, maxTokens) = FindOptimalModelAndMaxToken(
+            dialog.GetMessages(),
+            model,
+            maxTokens,
+            smallModel: ChatCompletionModels.Gpt4,
+            bigModel: ChatCompletionModels.Gpt4
+        );
 
         var response = await client.GetChatCompletions(
             dialog,
```
src/modules/OpenAI.ChatGpt.Modules.Translator/ChatGPTTranslatorService.cs

Lines changed: 8 additions & 2 deletions
```diff
@@ -103,7 +103,7 @@ internal virtual string CreateTextTranslationPrompt(string sourceLanguage, strin
         "In the response write ONLY translated text." +
         (_extraPrompt is not null ? "\n" + _extraPrompt : "");
     }
-
+
     public virtual async Task<TObject> TranslateObject<TObject>(
         TObject objectToTranslate,
         bool isBatch = false,
@@ -140,7 +140,13 @@ public virtual async Task<TObject> TranslateObject<TObject>(
         var objectJson = JsonSerializer.Serialize(objectToTranslate, jsonSerializerOptions);
         var dialog = Dialog.StartAsSystem(prompt).ThenUser(objectJson);
         var messages = dialog.GetMessages().ToArray();
-        (model, maxTokens) = ChatCompletionMessage.FindOptimalModelAndMaxToken(messages, model, maxTokens);
+        (model, maxTokens) = ChatCompletionMessage.FindOptimalModelAndMaxToken(
+            messages,
+            model,
+            maxTokens,
+            smallModel: ChatCompletionModels.Gpt4,
+            bigModel: ChatCompletionModels.Gpt4
+        );
         var response = await _client.GetStructuredResponse<TObject>(
             dialog,
             maxTokens.Value,
```
tests/OpenAI.ChatGpt.IntegrationTests/OpenAiClientTests/OpenAiClient_GetStructuredResponse.cs

Lines changed: 0 additions & 3 deletions
```diff
@@ -27,7 +27,6 @@ public async void Get_simple_structured_response_from_ChatGPT(string model)
     [Theory]
     [InlineData(ChatCompletionModels.Gpt4Turbo)]
     [InlineData(ChatCompletionModels.Gpt4)]
-    [InlineData(ChatCompletionModels.Gpt3_5_Turbo_1106)]
     public async void Get_structured_response_with_ARRAY_from_ChatGPT(string model)
     {
         var message = Dialog
@@ -51,7 +50,6 @@ public async void Get_structured_response_with_ARRAY_from_ChatGPT(string model)
     [Theory]
     [InlineData(ChatCompletionModels.Gpt4Turbo)]
     [InlineData(ChatCompletionModels.Gpt4)]
-    [InlineData(ChatCompletionModels.Gpt3_5_Turbo_1106)]
     public async void Get_structured_response_with_ENUM_from_ChatGPT(string model)
     {
         var message = Dialog
@@ -65,7 +63,6 @@ public async void Get_structured_response_with_ENUM_from_ChatGPT(string model)
     [Theory]
     [InlineData(ChatCompletionModels.Gpt4Turbo)]
     [InlineData(ChatCompletionModels.Gpt4)]
-    [InlineData(ChatCompletionModels.Gpt3_5_Turbo_1106)]
     public async void Get_structured_response_with_extra_data_from_ChatGPT(string model)
     {
         var message = Dialog
```

0 commit comments