Commit 72434a6 ("generate kotlin then modify")
Committed 2025/03/27
1 parent: 4fe54a5

File tree

741 files changed (+62743 / -1689 lines)

README.md

Lines changed: 10 additions & 1 deletion

@@ -8,7 +8,9 @@ NOTES:
 * There is https://github.com/openai/openai-java, which OpenAI describes as
   "The official Java library for the OpenAI API", but:
   1. That "official" library lags behind https://github.com/openai/openai-openapi/blob/master/openapi.yaml
-     For example, as of 2025/02/12 it is **STILL** lacking OpenAI's Realtime API (https://platform.openai.com/docs/api-reference/realtime), which is my main use case.
+     For example: OpenAI's Realtime API (https://platform.openai.com/docs/api-reference/realtime),
+     which is my main use case, is in https://github.com/openai/openai-openapi/blob/master/openapi.yaml,
+     but as of 2025/03/28 it is **STILL** not in https://github.com/openai/openai-java. :/
  2. `openai-java` is actually a nearly fully modernized Kotlin library, so the name
     `openai-java` is legacy;
     it really should be named `openai-kotlin`.
@@ -60,6 +62,13 @@ All of my changes can be seen at:
 https://github.com/swooby/openai-openapi-kotlin/pull/1/files

 ## Updates
+Very similar to the original generation.
+
+It usually takes me 1-2 hours to do this.
+More if there are more changes.
+Less if there are fewer changes.
+Keep in mind that some of this time is spent verifying/updating the documentation of changes below.
+
 When a new spec comes out:
 1. Make sure to start from a fresh/stashed checkout.
 2. `rm -r ./lib/src`
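The first two update steps above can be sketched as a shell snippet. This is a hypothetical illustration run against a throwaway directory, so it is safe to execute anywhere; the real workflow runs in the actual repo checkout, and the remaining steps are not shown in this diff.

```shell
# Hypothetical demo of the first two documented update steps, using a
# throwaway directory as a stand-in for the real checkout.
demo=$(mktemp -d)
cd "$demo"
mkdir -p lib/src && touch lib/src/Generated.kt  # stand-in for previously generated sources
# Step 1: start from a fresh/stashed checkout (e.g. `git stash`); skipped in this demo.
rm -r ./lib/src                                 # Step 2: remove the old generated sources
test ! -d lib/src && echo "lib/src removed"     # prints "lib/src removed"
```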

gradle/libs.versions.toml

Lines changed: 2 additions & 0 deletions

@@ -3,6 +3,7 @@ kotlin = "2.0.21"
 kotlintestRunnerJunit5 = "3.4.2"
 squareupMoshiKotlin = "1.15.1"
 squareupOkhttpBom = "4.12.0"
+spotless = "7.0.2"

 [libraries]
 kotlintest-runner-junit5 = { module = "io.kotlintest:kotlintest-runner-junit5", version.ref = "kotlintestRunnerJunit5" }

@@ -12,3 +13,4 @@ squareup-okhttp3 = { module = "com.squareup.okhttp3:okhttp" }

 [plugins]
 kotlin-jvm = { id = "org.jetbrains.kotlin.jvm", version.ref = "kotlin" }
+spotless = { id = "com.diffplug.spotless", version.ref = "spotless" }
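For context (a sketch, not part of this commit): the `[plugins]` entry above is what makes the type-safe accessor `libs.plugins.spotless` available in build scripts, as consumed in `lib/build.gradle.kts`.

```kotlin
// build.gradle.kts (sketch): Gradle derives the `libs.plugins.spotless`
// accessor from the `spotless` key in the catalog's [plugins] table,
// pinning the plugin to the version declared under [versions].
plugins {
    alias(libs.plugins.spotless)
}
```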
gradle/wrapper/gradle-wrapper.properties

Lines changed: 2 additions & 1 deletion

@@ -1,6 +1,7 @@
-#Sun Dec 15 17:37:56 PST 2024
 distributionBase=GRADLE_USER_HOME
 distributionPath=wrapper/dists
 distributionUrl=https\://services.gradle.org/distributions/gradle-8.12-bin.zip
+networkTimeout=10000
+validateDistributionUrl=true
 zipStoreBase=GRADLE_USER_HOME
 zipStorePath=wrapper/dists

lib/README.md

Lines changed: 201 additions & 16 deletions
Large diffs are not rendered by default.

lib/build.gradle.kts

Lines changed: 4 additions & 1 deletion

@@ -2,8 +2,8 @@ import org.jetbrains.kotlin.gradle.tasks.KotlinCompilationTask

 plugins {
     alias(libs.plugins.kotlin.jvm)
-    id("com.diffplug.spotless") version "7.0.2"
     // id("maven-publish")
+    alias(libs.plugins.spotless)
 }

 group = "com.openai"

@@ -32,6 +32,9 @@ dependencies {
     testImplementation(libs.kotlintest.runner.junit5)
 }

+// Use the Spotless plugin to automatically format code, remove unused imports, etc.
+// To apply changes directly to the files, run `gradlew spotlessApply`.
+// Ref: https://github.com/diffplug/spotless/tree/main/plugin-gradle
 spotless {
     kotlin {
         ktfmt("0.54").googleStyle().configure {

lib/docs/Annotation.md

Lines changed: 23 additions & 0 deletions

@@ -0,0 +1,23 @@
+
+# Annotation
+
+## Properties
+| Name | Type | Description | Notes |
+| ------------ | ------------- | ------------- | ------------- |
+| **type** | [**inline**](#Type) | The type of the file citation. Always `file_citation`. | |
+| **index** | **kotlin.Int** | The index of the file in the list of files. | |
+| **fileId** | **kotlin.String** | The ID of the file. | |
+| **url** | **kotlin.String** | The URL of the web resource. | |
+| **title** | **kotlin.String** | The title of the web resource. | |
+| **startIndex** | **kotlin.Int** | The index of the first character of the URL citation in the message. | |
+| **endIndex** | **kotlin.Int** | The index of the last character of the URL citation in the message. | |
+
+
+<a id="Type"></a>
+## Enum: type
+| Name | Value |
+| ---- | ----- |
+| type | file_citation, url_citation, file_path |
+
+
+
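The property table above reads as a union-like shape keyed on `type`. Here is a hypothetical, hand-written Kotlin sketch of it; the actual generated class lives in `lib/src` and may differ.

```kotlin
// Hypothetical model of the generated `Annotation` doc above — a sketch only.
data class Annotation(
    val type: String,            // enum: file_citation, url_citation, file_path
    val index: Int? = null,      // file index (file_citation)
    val fileId: String? = null,  // file ID (file_citation / file_path)
    val url: String? = null,     // web resource URL (url_citation)
    val title: String? = null,   // web resource title (url_citation)
    val startIndex: Int? = null, // first character of the URL citation in the message
    val endIndex: Int? = null,   // last character of the URL citation in the message
)

// Only the fields relevant to the chosen `type` are populated; the rest stay null.
val cite = Annotation(type = "url_citation", url = "https://example.com",
                      title = "Example", startIndex = 0, endIndex = 10)
println(cite.type)  // prints "url_citation"
```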

lib/docs/AssistantObject.md

Lines changed: 1 addition & 1 deletion

@@ -16,7 +16,7 @@
 | **toolResources** | [**AssistantObjectToolResources**](AssistantObjectToolResources.md) | | [optional] |
 | **temperature** | [**java.math.BigDecimal**](java.math.BigDecimal.md) | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. | [optional] |
 | **topP** | [**java.math.BigDecimal**](java.math.BigDecimal.md) | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. | [optional] |
-| **responseFormat** | [**AssistantObjectResponseFormat**](AssistantObjectResponseFormat.md) | | [optional] |
+| **responseFormat** | [**AssistantsApiResponseFormatOption**](AssistantsApiResponseFormatOption.md) | | [optional] |


 <a id="`Object`"></a>

lib/docs/AssistantSupportedModels.md

Lines changed: 4 additions & 0 deletions

@@ -24,6 +24,10 @@

 * `gptMinus4oMinusMiniMinus2024Minus07Minus18` (value: `"gpt-4o-mini-2024-07-18"`)

+* `gptMinus4Period5MinusPreview` (value: `"gpt-4.5-preview"`)
+
+* `gptMinus4Period5MinusPreviewMinus2025Minus02Minus27` (value: `"gpt-4.5-preview-2025-02-27"`)
+
 * `gptMinus4MinusTurbo` (value: `"gpt-4-turbo"`)

 * `gptMinus4MinusTurboMinus2024Minus04Minus09` (value: `"gpt-4-turbo-2024-04-09"`)
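The enum names above follow the generator's convention of spelling out characters that are illegal in Kotlin identifiers: `-` becomes `Minus` and `.` becomes `Period`, with the following character upper-cased. A hypothetical sketch of that mapping (the real generator's logic differs):

```kotlin
// Hypothetical illustration of the name mangling seen above: '-' -> "Minus",
// '.' -> "Period", and the character after either is upper-cased so each API
// value maps to a legal Kotlin identifier.
fun mangle(value: String): String {
    val sb = StringBuilder()
    var upperNext = false
    for (c in value) {
        when (c) {
            '-' -> { sb.append("Minus"); upperNext = true }
            '.' -> { sb.append("Period"); upperNext = true }
            else -> { sb.append(if (upperNext) c.uppercaseChar() else c); upperNext = false }
        }
    }
    return sb.toString()
}

println(mangle("gpt-4.5-preview"))  // prints "gptMinus4Period5MinusPreview"
```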

lib/docs/AssistantsApiResponseFormatOption.md

Lines changed: 2 additions & 2 deletions

@@ -4,8 +4,8 @@
 ## Properties
 | Name | Type | Description | Notes |
 | ------------ | ------------- | ------------- | ------------- |
-| **type** | [**inline**](#Type) | The type of response format being defined: &#x60;text&#x60; | |
-| **jsonSchema** | [**ResponseFormatJsonSchemaJsonSchema**](ResponseFormatJsonSchemaJsonSchema.md) | | |
+| **type** | [**inline**](#Type) | The type of response format being defined. Always &#x60;text&#x60;. | |
+| **jsonSchema** | [**JSONSchema**](JSONSchema.md) | | |


 <a id="Type"></a>

lib/docs/AudioApi.md

Lines changed: 11 additions & 7 deletions

@@ -57,7 +57,7 @@ Configure ApiKeyAuth:

 <a id="createTranscription"></a>
 # **createTranscription**
-> CreateTranscription200Response createTranscription(file, model, language, prompt, responseFormat, temperature, timestampGranularities)
+> CreateTranscription200Response createTranscription(file, model, language, prompt, responseFormat, temperature, include, timestampGranularities, stream)

 Transcribes audio into the input language.

@@ -74,9 +74,11 @@ val language : kotlin.String = language_example // kotlin.String | The language
 val prompt : kotlin.String = prompt_example // kotlin.String | An optional text to guide the model's style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text#prompting) should match the audio language.
 val responseFormat : AudioResponseFormat = // AudioResponseFormat |
 val temperature : java.math.BigDecimal = 8.14 // java.math.BigDecimal | The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit.
+val include : kotlin.collections.List<TranscriptionInclude> = // kotlin.collections.List<TranscriptionInclude> | Additional information to include in the transcription response. `logprobs` will return the log probabilities of the tokens in the response to understand the model's confidence in the transcription. `logprobs` only works with response_format set to `json` and only with the models `gpt-4o-transcribe` and `gpt-4o-mini-transcribe`.
 val timestampGranularities : kotlin.collections.List<kotlin.String> = // kotlin.collections.List<kotlin.String> | The timestamp granularities to populate for this transcription. `response_format` must be set `verbose_json` to use timestamp granularities. Either or both of these options are supported: `word`, or `segment`. Note: There is no additional latency for segment timestamps, but generating word timestamps incurs additional latency.
+val stream : kotlin.Boolean = true // kotlin.Boolean | If set to true, the model response data will be streamed to the client as it is generated using [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format). See the [Streaming section of the Speech-to-Text guide](/docs/guides/speech-to-text?lang=curl#streaming-transcriptions) for more information. Note: Streaming is not supported for the `whisper-1` model and will be ignored.
 try {
-    val result : CreateTranscription200Response = apiInstance.createTranscription(file, model, language, prompt, responseFormat, temperature, timestampGranularities)
+    val result : CreateTranscription200Response = apiInstance.createTranscription(file, model, language, prompt, responseFormat, temperature, include, timestampGranularities, stream)
     println(result)
 } catch (e: ClientException) {
     println("4xx response calling AudioApi#createTranscription")

@@ -94,9 +96,11 @@ try {
 | **prompt** | **kotlin.String**| An optional text to guide the model&#39;s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text#prompting) should match the audio language. | [optional] |
 | **responseFormat** | [**AudioResponseFormat**](AudioResponseFormat.md)| | [optional] [default to json] [enum: json, text, srt, verbose_json, vtt] |
 | **temperature** | **java.math.BigDecimal**| The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit. | [optional] [default to 0] |
+| **include** | [**kotlin.collections.List&lt;TranscriptionInclude&gt;**](TranscriptionInclude.md)| Additional information to include in the transcription response. &#x60;logprobs&#x60; will return the log probabilities of the tokens in the response to understand the model&#39;s confidence in the transcription. &#x60;logprobs&#x60; only works with response_format set to &#x60;json&#x60; and only with the models &#x60;gpt-4o-transcribe&#x60; and &#x60;gpt-4o-mini-transcribe&#x60;. | [optional] |
+| **timestampGranularities** | [**kotlin.collections.List&lt;kotlin.String&gt;**](kotlin.String.md)| The timestamp granularities to populate for this transcription. &#x60;response_format&#x60; must be set &#x60;verbose_json&#x60; to use timestamp granularities. Either or both of these options are supported: &#x60;word&#x60;, or &#x60;segment&#x60;. Note: There is no additional latency for segment timestamps, but generating word timestamps incurs additional latency. | [optional] [enum: word, segment] |
 | Name | Type | Description | Notes |
 | ------------- | ------------- | ------------- | ------------- |
-| **timestampGranularities** | [**kotlin.collections.List&lt;kotlin.String&gt;**](kotlin.String.md)| The timestamp granularities to populate for this transcription. &#x60;response_format&#x60; must be set &#x60;verbose_json&#x60; to use timestamp granularities. Either or both of these options are supported: &#x60;word&#x60;, or &#x60;segment&#x60;. Note: There is no additional latency for segment timestamps, but generating word timestamps incurs additional latency. | [optional] [enum: word, segment] |
+| **stream** | **kotlin.Boolean**| If set to true, the model response data will be streamed to the client as it is generated using [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format). See the [Streaming section of the Speech-to-Text guide](/docs/guides/speech-to-text?lang&#x3D;curl#streaming-transcriptions) for more information. Note: Streaming is not supported for the &#x60;whisper-1&#x60; model and will be ignored. | [optional] [default to false] |

 ### Return type

@@ -127,9 +131,9 @@ Translates audio into English.

 val apiInstance = AudioApi()
 val file : java.io.File = BINARY_DATA_HERE // java.io.File | The audio file object (not file name) translate, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
-val model : CreateTranscriptionRequestModel = // CreateTranscriptionRequestModel |
+val model : CreateTranslationRequestModel = // CreateTranslationRequestModel |
 val prompt : kotlin.String = prompt_example // kotlin.String | An optional text to guide the model's style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text#prompting) should be in English.
-val responseFormat : AudioResponseFormat = // AudioResponseFormat |
+val responseFormat : kotlin.String = responseFormat_example // kotlin.String | The format of the output, in one of these options: `json`, `text`, `srt`, `verbose_json`, or `vtt`.
 val temperature : java.math.BigDecimal = 8.14 // java.math.BigDecimal | The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit.
 try {
     val result : CreateTranslation200Response = apiInstance.createTranslation(file, model, prompt, responseFormat, temperature)

@@ -145,9 +149,9 @@ try {

 ### Parameters
 | **file** | **java.io.File**| The audio file object (not file name) translate, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm. | |
-| **model** | [**CreateTranscriptionRequestModel**](CreateTranscriptionRequestModel.md)| | |
+| **model** | [**CreateTranslationRequestModel**](CreateTranslationRequestModel.md)| | |
 | **prompt** | **kotlin.String**| An optional text to guide the model&#39;s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text#prompting) should be in English. | [optional] |
-| **responseFormat** | [**AudioResponseFormat**](AudioResponseFormat.md)| | [optional] [default to json] [enum: json, text, srt, verbose_json, vtt] |
+| **responseFormat** | **kotlin.String**| The format of the output, in one of these options: &#x60;json&#x60;, &#x60;text&#x60;, &#x60;srt&#x60;, &#x60;verbose_json&#x60;, or &#x60;vtt&#x60;. | [optional] [default to json] [enum: json, text, srt, verbose_json, vtt] |
 | Name | Type | Description | Notes |
 | ------------- | ------------- | ------------- | ------------- |
 | **temperature** | **java.math.BigDecimal**| The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit. | [optional] [default to 0] |
