Translator and StructuredResponse modules now default to the GPT4 model

Changed `ChatGPTTranslatorService` and `OpenAiClientExtensions.GetStructuredResponse` to use the GPT4 model by default, since GPT4 provides more stable responses than the previous models. The `README.md` file was also updated to reflect these changes.
Code specific to the `gpt-3.5-turbo-1106` model has been removed from the `OpenAiClient_GetStructuredResponse.cs` test cases, as it is now redundant. The change aims to improve translation quality and the stability of responses in the application.
OpenAI Chat Completions API (ChatGPT) integration with DI and EF Core support. It allows you to use the API in your .NET applications. The client also supports streaming responses (like ChatGPT) via async streams.
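A minimal sketch of the streaming usage mentioned above. The client construction and the `StreamChatCompletions`/`UserMessage` names are assumptions based on this repository's public API; the prompt and token limit are invented for illustration:

```csharp
using OpenAI.ChatGpt;

// Sketch only: assumes OpenAiClient and StreamChatCompletions from this library,
// where StreamChatCompletions returns an IAsyncEnumerable<string> of chunks.
var client = new OpenAiClient("{YOUR_OPENAI_API_KEY}");

await foreach (string chunk in client.StreamChatCompletions(
    new UserMessage("Write a haiku about programming."), maxTokens: 80))
{
    Console.Write(chunk); // chunks arrive as the model generates them
}
```

Because the result is an `IAsyncEnumerable<string>`, tokens can be rendered to the UI as they arrive instead of waiting for the full completion.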
```diff
-[NEW!] `StructuredResponse` module allows you to get structured responses from the API as C# object. See: [StructuredResponse](#structuredresponse) section.
+## 2023.11 UPD: GPT4Turbo and JSON mode support
+
+`StructuredResponse` module allows you to get structured responses from the API as a C# object. See the [StructuredResponse](#structuredresponse) section.
+Under the hood, it uses the new [JSON mode](https://platform.openai.com/docs/guides/text-generation/json-mode) of the API for GPT4Turbo and for `gpt-3.5-turbo-1106`. Regular GPT4 and GPT3.5Turbo models are also supported, but GPT3.5 responses may be unstable (for GPT3.5 it is strongly recommended to provide the `examples` parameter).
```
More complex examples with arrays, nested objects and enums are available in tests: https://github.com/rodion-m/ChatGPT_API_dotnet/blob/f50d386f0b65a4ba8c1041a28bab2a1a475c2296/tests/OpenAI.ChatGpt.IntegrationTests/OpenAiClientTests/OpenAiClient_GetStructuredResponse.cs#L1
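The `StructuredResponse` flow described above can be sketched as follows. The `City` record and the prompt are invented for illustration; `Dialog`, `GetStructuredResponse`, and `ChatCompletionModels.Gpt4` are assumptions based on this repository's API and the doc comments in this commit:

```csharp
using OpenAI.ChatGpt;
using OpenAI.ChatGpt.Models.ChatCompletion;

// Hypothetical response model, for illustration only.
record City(string Name, int Population);

var client = new OpenAiClient("{YOUR_OPENAI_API_KEY}");
var dialog = Dialog.StartAsSystem("Return information about the city the user names.")
                   .ThenUser("Paris");

// JSON mode is used under the hood for GPT4Turbo and gpt-3.5-turbo-1106.
// For GPT3.5 it is strongly recommended to also pass the `examples` parameter.
City city = await client.GetStructuredResponse<City>(
    dialog,
    model: ChatCompletionModels.Gpt4, // the new default as of this commit
    examples: new[] { new City("London", 8_800_000) });

Console.WriteLine($"{city.Name}: {city.Population}");
```

The response is deserialized into the target type, so the caller works with a typed `City` rather than raw JSON.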
```diff
 /// <param name="dialog">The chat dialog, including a user message and any system messages that set the behavior of the assistant.</param>
 /// <param name="maxTokens">Optional. The maximum number of tokens in the response. Defaults to the limit of the model, minus the number of input tokens, minus 500.</param>
-/// <param name="model">Optional. The name of the model to use. Defaults to "text-davinci-002" unless the message input is longer than 6000 tokens, in which case it defaults to "text-davinci-003".</param>
+/// <param name="model">Optional. The name of the model to use. Defaults to <see cref="ChatCompletionModels.Gpt4"/>. It's recommended to use GPT4+.</param>
 /// <param name="temperature">Controls the randomness of the assistant’s output. Ranges from 0.0 to 1.0, where 0.0 is deterministic and 1.0 is highly random. Default value is the default for the OpenAI API.</param>
-/// <param name="user">Optional. The user who is having the conversation. If not specified, defaults to "system".</param>
+/// <param name="user">Optional. The user ID who is having the conversation.</param>
 /// <param name="requestModifier">Optional. A function that can modify the chat completion request before it is sent to the API.</param>
 /// <param name="rawResponseGetter">Optional. A function that can access the raw API response.</param>
 /// <param name="jsonDeserializerOptions">Optional. Custom JSON deserializer options for the deserialization. If not specified, default options with case-insensitive property names are used.</param>
 /// <param name="jsonSerializerOptions">Optional. Custom JSON serializer options for the serialization.</param>
-/// <param name="examples">Optional. Example of the models those will be serialized using <paramref name="jsonSerializerOptions"/></param>
+/// <param name="examples">Optional. Examples of the models that will be serialized using <paramref name="jsonSerializerOptions"/>.</param>
 /// <param name="cancellationToken">Optional. A cancellation token that can be used to cancel the operation.</param>
 /// <returns>
 /// A task that represents the asynchronous operation. The task result contains the deserialized object from the API response.
```
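A hedged sketch of a call that exercises several of the optional parameters documented above. The parameter names come from the doc comments in this commit; the `Order` record, the prompt, and the specific argument values are invented for illustration:

```csharp
using System.Text.Json;
using OpenAI.ChatGpt;
using OpenAI.ChatGpt.Models.ChatCompletion;

// Hypothetical response model, for illustration only.
record Order(string Product, int Quantity);

var client = new OpenAiClient("{YOUR_OPENAI_API_KEY}");
var dialog = Dialog.StartAsSystem("Extract the order from the user's message.")
                   .ThenUser("I'd like three keyboards, please.");

Order order = await client.GetStructuredResponse<Order>(
    dialog,
    maxTokens: 256,                   // cap the response size
    model: ChatCompletionModels.Gpt4, // the new default as of this commit
    temperature: 0.0f,                // 0.0 = deterministic output
    user: "user-1234",                // user ID, per the updated doc comment
    jsonDeserializerOptions: new JsonSerializerOptions
    {
        PropertyNameCaseInsensitive = true // matches the documented default
    });
```

When the optional parameters are omitted, the defaults described in the doc comments apply, so most callers only need to pass the dialog.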