refactor(ai-openai): redesign model metadata catalogs #399

nick-potts wants to merge 6 commits into TanStack:main
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings.
📝 Walkthrough

Centralizes OpenAI model metadata into typed registries and shared utilities; replaces hardcoded per-model lists with registry-driven types/validators/exports; updates adapters, realtime defaults, and validation logic to consume the registry; and adds tests plus a changeset for a minor release.

Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

🚥 Pre-merge checks: ✅ Passed checks (3 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/typescript/ai-openai/src/image/image-provider-options.ts (1)
252-262: ⚠️ Potential issue | 🟡 Minor — Inconsistent unknown model handling in `validatePrompt`.

`validatePrompt` silently returns when the model is unknown (line 256), while `validateImageSize`, `validateNumberOfImages`, and `validateBackground` all throw errors for unknown models. This inconsistency could mask configuration issues for unknown models. Consider aligning the behavior:

🔧 Suggested fix for consistent error handling

```diff
 export const validatePrompt = (options: ImageValidationOptions) => {
   if (options.prompt.length === 0) {
     throw new Error('Prompt cannot be empty.')
   }
-  if (!Object.hasOwn(IMAGE_MODELS, options.model)) return
+  if (!Object.hasOwn(IMAGE_MODELS, options.model)) {
+    throw new Error(`Unknown image model: ${options.model}`)
+  }
   const modelMeta = IMAGE_MODELS[options.model as keyof typeof IMAGE_MODELS]
   if (options.prompt.length > modelMeta.maxPromptLength) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openai/src/image/image-provider-options.ts` around lines 252 - 262, validatePrompt currently returns silently when options.model is not in IMAGE_MODELS, unlike validateImageSize/validateNumberOfImages/validateBackground which throw for unknown models; change validatePrompt (in validatePrompt function) to throw a descriptive Error when Object.hasOwn(IMAGE_MODELS, options.model) is false (e.g., "Unknown image model: {options.model}") instead of returning, so model-handling is consistent across IMAGE_MODELS checks.
🧹 Nitpick comments (2)
packages/typescript/ai-openai/src/audio/audio-provider-options.ts (2)
70-79: Same fail-open pattern applies to `validateInstructions`.

As with `validateStreamFormat`, unregistered models bypass instruction validation. The consistency is good, but the same forward-compatibility vs. early-error tradeoff applies.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openai/src/audio/audio-provider-options.ts` around lines 70 - 79, validateInstructions currently returns early when options.model is not in TTS_MODELS, allowing unregistered models to bypass instruction validation; change this to fail-fast by throwing a clear Error when the model key is unknown (instead of returning), then proceed to check modelMeta.supportsInstructions and throw the existing error if instructions are not supported; reference the validateInstructions function and the TTS_MODELS lookup so you update that conditional branch to raise an Error for unknown models rather than returning.
51-59: Consider the fail-open behavior for unknown models.

When `options.model` is not in `TTS_MODELS`, validation silently returns without error. This means typos or future models bypass `stream_format` validation entirely, potentially leading to less helpful runtime API errors.

This appears intentional for forward compatibility, but consider whether a warning or explicit "unknown model" handling would improve developer experience.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openai/src/audio/audio-provider-options.ts` around lines 51 - 59, The current validateStreamFormat function returns silently when options.model is not in TTS_MODELS, which can hide typos or unsupported models; update validateStreamFormat to detect unknown models (check Object.hasOwn(TTS_MODELS, options.model)) and if unknown and options.stream_format is present emit a warning (e.g., via console.warn or a provided logger) that the model is unrecognized and stream_format may not be supported, otherwise preserve existing behavior for known models and keep the existing error throw when modelMeta.supportsStreaming is false; reference validateStreamFormat and TTS_MODELS to locate the change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 65440dc0-8666-4048-a38e-128c0cd91335
📒 Files selected for processing (24)
- .changeset/major-teeth-greet.md
- packages/typescript/ai-openai/src/adapters/image.ts
- packages/typescript/ai-openai/src/adapters/text.ts
- packages/typescript/ai-openai/src/adapters/transcription.ts
- packages/typescript/ai-openai/src/adapters/tts.ts
- packages/typescript/ai-openai/src/adapters/video.ts
- packages/typescript/ai-openai/src/audio/audio-provider-options.ts
- packages/typescript/ai-openai/src/audio/transcription-provider-options.ts
- packages/typescript/ai-openai/src/image/image-provider-options.ts
- packages/typescript/ai-openai/src/index.ts
- packages/typescript/ai-openai/src/meta/realtime.ts
- packages/typescript/ai-openai/src/model-meta.ts
- packages/typescript/ai-openai/src/models/audio.ts
- packages/typescript/ai-openai/src/models/image.ts
- packages/typescript/ai-openai/src/models/index.ts
- packages/typescript/ai-openai/src/models/shared.ts
- packages/typescript/ai-openai/src/models/text.ts
- packages/typescript/ai-openai/src/models/video.ts
- packages/typescript/ai-openai/src/realtime/adapter.ts
- packages/typescript/ai-openai/src/realtime/token.ts
- packages/typescript/ai-openai/src/realtime/types.ts
- packages/typescript/ai-openai/src/text/text-provider-options.ts
- packages/typescript/ai-openai/src/video/video-provider-options.ts
- packages/typescript/ai-openai/tests/model-meta.test.ts
🧹 Nitpick comments (4)
packages/typescript/ai-openai/tests/transcription-adapter.test.ts (1)
1-4: Add `afterEach` cleanup for mock consistency.

Unlike `audio-provider-options.test.ts` and `video-adapter.test.ts`, this test file lacks `afterEach(() => vi.restoreAllMocks())`. While not currently causing issues since mocks are localized per-test, adding cleanup maintains consistency across the test suite and prevents potential test pollution if additional tests are added later.

♻️ Proposed fix

```diff
-import { describe, expect, it, vi } from 'vitest'
+import { afterEach, describe, expect, it, vi } from 'vitest'
 import { createOpenaiTranscription } from '../src/adapters/transcription'

 describe('OpenAI transcription adapter', () => {
+  afterEach(() => {
+    vi.restoreAllMocks()
+  })
+
   it('defaults non-whisper models to verbose_json and maps rich responses', async () => {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openai/tests/transcription-adapter.test.ts` around lines 1 - 4, Add an afterEach cleanup to restore mocks in the OpenAI transcription adapter test file: include afterEach(() => vi.restoreAllMocks()) in the describe block for "OpenAI transcription adapter" (the test file uses vi from vitest and the describe/it structure), matching the pattern used in audio-provider-options.test.ts and video-adapter.test.ts to prevent mock leakage between tests.

packages/typescript/ai-openai/tests/video-adapter.test.ts (1)
44-75: Consider clarifying the test's intent regarding duration/seconds precedence.

The test passes both `duration: 8` and `modelOptions.seconds: '4'`, then expects `seconds: '8'` in the payload. This verifies that `duration` takes precedence over `modelOptions.seconds`, but the test name "maps request options into create payloads" doesn't convey this override behavior.

Consider either:
- Renaming to something like `'duration overrides modelOptions.seconds in create payloads'`
- Adding a separate test case that only uses `modelOptions.seconds` without `duration`

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openai/tests/video-adapter.test.ts` around lines 44 - 75, Rename or add tests to make the precedence behavior explicit: update the existing test in video-adapter.test.ts (the one invoking createOpenaiVideo(...) and calling adapter.createVideoJob(...)) to either rename it to "duration overrides modelOptions.seconds in create payloads" to document that duration takes precedence over modelOptions.seconds, or add a new test that passes only modelOptions.seconds (no duration) to verify seconds is used when duration is absent; ensure references remain to createOpenaiVideo and createVideoJob and that the expected payload assertions reflect the intended precedence.

packages/typescript/ai-openai/src/image/image-provider-options.ts (1)
200-204: Consider extracting model metadata lookup helper.

The pattern of checking `Object.hasOwn(IMAGE_MODELS, model)` and casting is repeated across all validators. A helper would reduce duplication and centralize the "unknown model" error handling.

Example helper

```typescript
function getImageModelMeta(model: string) {
  if (!Object.hasOwn(IMAGE_MODELS, model)) {
    throw new Error(`Unknown image model: ${model}`)
  }
  return IMAGE_MODELS[model as keyof typeof IMAGE_MODELS]
}
```

Also applies to: 224-228, 242-246, 262-265
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openai/src/image/image-provider-options.ts` around lines 200 - 204, Extract a helper like getImageModelMeta(model: string) that encapsulates the existing lookup and error handling around IMAGE_MODELS (i.e., perform Object.hasOwn(IMAGE_MODELS, model) and throw `Unknown image model: ${model}` if missing, otherwise return IMAGE_MODELS[model as keyof typeof IMAGE_MODELS]); then replace the repeated pattern in the validators (where you currently check Object.hasOwn(IMAGE_MODELS, model) and assign modelMeta) with calls to getImageModelMeta(model) so all lookups use the same centralized logic and error message.

packages/typescript/ai-openai/src/text/text-provider-options.ts (1)
230-233: `ExternalTextProviderOptions` is currently widened beyond per-model safety.

`AnyReasoningOptions` makes `ExternalTextProviderOptions` accept broad reasoning combinations at compile time, which weakens model-specific option safety unless every call site re-narrows it.

♻️ Suggested direction

```diff
-type AnyReasoningOptions = OpenAIReasoningOptionsWithConcise
-
-export type ExternalTextProviderOptions = OpenAIBaseOptions &
-  AnyReasoningOptions &
+export type ExternalTextProviderOptions<
+  TEffort extends OpenAIReasoningEffort = OpenAIReasoningEffort,
+  TSummary extends OpenAIReasoningSummaryWithConcise = OpenAIReasoningSummaryWithConcise,
+> = OpenAIBaseOptions &
+  OpenAIReasoningOptions<TEffort, TSummary> &
   OpenAIStructuredOutputOptions &
   OpenAIToolsOptions &
   OpenAIStreamingOptions &
   OpenAIMetadataOptions
```

Then specialize `TEffort`/`TSummary` from the model registry where the model is known.

As per coding guidelines: "Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety."
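To illustrate the direction with a self-contained sketch (the literal unions and option shape below are hypothetical stand-ins, not the package's actual definitions):

```typescript
// Hypothetical effort/summary unions for illustration.
type OpenAIReasoningEffort = 'minimal' | 'low' | 'medium' | 'high'
type OpenAIReasoningSummaryWithConcise = 'auto' | 'concise' | 'detailed'

// Generic options narrowed per model, mirroring the suggested direction.
type ExternalTextProviderOptions<
  TEffort extends OpenAIReasoningEffort = OpenAIReasoningEffort,
  TSummary extends OpenAIReasoningSummaryWithConcise = OpenAIReasoningSummaryWithConcise,
> = {
  reasoning?: { effort?: TEffort; summary?: TSummary }
}

// A registry entry for a specific model could then pin the allowed values;
// passing summary: 'detailed' here would be a compile-time error.
type ComputerUseOptions = ExternalTextProviderOptions<'medium' | 'high', 'concise'>

const options: ComputerUseOptions = {
  reasoning: { effort: 'high', summary: 'concise' },
}
```

The narrowing happens entirely at the type level; at runtime the options object is unchanged.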
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openai/src/text/text-provider-options.ts` around lines 230 - 233, ExternalTextProviderOptions is currently too wide because it composes in AnyReasoningOptions, which permits any reasoning combo at all call sites; change it to be model-specific by removing the broad AnyReasoningOptions usage and instead make ExternalTextProviderOptions generic over the selected model (or accept a model type parameter) and derive TEffort/TSummary from the model registry for that model; specifically replace the union use of AnyReasoningOptions with model-scoped reasoning types resolved from the registry so TEffort and TSummary are specialized based on the chosen model (use the registry lookup utility/type to map model -> its reasoning option types).
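The duration/seconds precedence rule under review can be sketched as a standalone mapping function; `resolveSeconds` and the option shape here are hypothetical illustrations of the behavior the test asserts:

```typescript
interface VideoRequestOptions {
  duration?: number
  modelOptions?: { seconds?: string }
}

// duration (top-level, in seconds) overrides modelOptions.seconds when both
// are provided; otherwise modelOptions.seconds is passed through as-is.
function resolveSeconds(options: VideoRequestOptions): string | undefined {
  if (options.duration !== undefined) {
    return String(options.duration)
  }
  return options.modelOptions?.seconds
}
```

This makes the override explicit: `{ duration: 8, modelOptions: { seconds: '4' } }` resolves to `'8'`, while omitting `duration` falls back to `'4'`.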
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 37d5e312-ece6-404d-8f2a-cb3a1a997071
📒 Files selected for processing (12)
- packages/typescript/ai-openai/src/audio/audio-provider-options.ts
- packages/typescript/ai-openai/src/image/image-provider-options.ts
- packages/typescript/ai-openai/src/models/shared.ts
- packages/typescript/ai-openai/src/models/text.ts
- packages/typescript/ai-openai/src/text/text-provider-options.ts
- packages/typescript/ai-openai/tests/audio-provider-options.test.ts
- packages/typescript/ai-openai/tests/image-adapter.test.ts
- packages/typescript/ai-openai/tests/model-meta.test.ts
- packages/typescript/ai-openai/tests/text-provider-options.test.ts
- packages/typescript/ai-openai/tests/transcription-adapter.test.ts
- packages/typescript/ai-openai/tests/video-adapter.test.ts
- packages/typescript/ai-openai/tests/video-provider-options.test.ts
✅ Files skipped from review due to trivial changes (3)
- packages/typescript/ai-openai/tests/image-adapter.test.ts
- packages/typescript/ai-openai/tests/video-provider-options.test.ts
- packages/typescript/ai-openai/tests/text-provider-options.test.ts
🚧 Files skipped from review as they are similar to previous changes (3)
- packages/typescript/ai-openai/src/audio/audio-provider-options.ts
- packages/typescript/ai-openai/src/models/text.ts
- packages/typescript/ai-openai/src/models/shared.ts
Actionable comments posted: 1
🧹 Nitpick comments (4)
packages/typescript/ai-openai/tests/transcription-adapter.test.ts (1)
27-35: Refactor repeated client-stubbing block into a local helper.

The repeated deep cast + client assignment makes updates error-prone. A tiny helper keeps tests easier to maintain.

♻️ Proposed refactor

```diff
+function attachMockCreate(
+  adapter: unknown,
+  create: ReturnType<typeof vi.fn>,
+) {
+  ;(adapter as unknown as {
+    client: { audio: { transcriptions: { create: unknown } } }
+  }).client = {
+    audio: {
+      transcriptions: { create },
+    },
+  }
+}
+
 const adapter = createOpenaiTranscription('gpt-4o-transcribe', 'test-api-key')
-;(adapter as unknown as { client: { audio: { transcriptions: { create: unknown } } } }).client =
-  {
-    audio: {
-      transcriptions: {
-        create,
-      },
-    },
-  }
+attachMockCreate(adapter, create)
```

Also applies to: 67-75, 101-109, 136-144
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openai/tests/transcription-adapter.test.ts` around lines 27 - 35, Extract the repeated deep-cast and client assignment used in tests into a small helper function (e.g., createStubbedClient or stubAdapterClient) and replace the inline blocks that assign (adapter as unknown as { client: { audio: { transcriptions: { create: unknown } } } }).client = { audio: { transcriptions: { create, }, }, } with calls to that helper; the helper should accept the create mock (or an object of mocks) and perform the cast and assignment so test sites around adapter, create, and client (lines currently duplicating at the four locations) become single-line helper calls for easier maintenance.

packages/typescript/ai-openai/tests/tts-adapter.test.ts (1)
11-18: Optional: extract repeated client-mocking setup into a small helper.

Both tests duplicate the same adapter client patch block; extracting a helper would reduce noise and improve maintainability.

♻️ Suggested refactor

```diff
+const withMockedSpeechCreate = (
+  model: 'gpt-4o-mini-tts' | 'tts-1',
+  create: ReturnType<typeof vi.fn>,
+) => {
+  const adapter = createOpenaiSpeech(model, 'test-api-key')
+  ;(adapter as unknown as { client: { audio: { speech: { create: unknown } } } }).client = {
+    audio: { speech: { create } },
+  }
+  return adapter
+}
+
 describe('OpenAI TTS adapter', () => {
   it('passes supported instructions through and returns mp3 output metadata', async () => {
     const create = vi
       .fn()
       .mockResolvedValueOnce(new Response(Uint8Array.from([1, 2, 3])))
-    const adapter = createOpenaiSpeech('gpt-4o-mini-tts', 'test-api-key')
-    ;(adapter as unknown as { client: { audio: { speech: { create: unknown } } } }).client =
-      {
-        audio: {
-          speech: {
-            create,
-          },
-        },
-      }
+    const adapter = withMockedSpeechCreate('gpt-4o-mini-tts', create)
@@
   it('rejects unsupported instructions before calling the API', async () => {
     const create = vi.fn()
-    const adapter = createOpenaiSpeech('tts-1', 'test-api-key')
-    ;(adapter as unknown as { client: { audio: { speech: { create: unknown } } } }).client =
-      {
-        audio: {
-          speech: {
-            create,
-          },
-        },
-      }
+    const adapter = withMockedSpeechCreate('tts-1', create)
```

Also applies to: 46-53
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openai/tests/tts-adapter.test.ts` around lines 11 - 18, Extract the repeated adapter client patch into a small helper function (e.g., mockAdapterClient) that takes the adapter and the create mock and assigns (adapter as unknown as { client: { audio: { speech: { create: unknown } } } }).client = { audio: { speech: { create } } }; replace the duplicated blocks in the tests with calls to this helper so both occurrences around adapter and audio.speech.create use the same utility.

packages/typescript/ai-openai/src/models/text.ts (1)
39-42: Make reasoning support a single type-level invariant.

`TextProviderOptionsForEntry` ignores the reasoning feature keys and reconstructs reasoning support from `entry.reasoning`, so `features` and `reasoning` can drift independently. That means a model can still expose reasoning options after losing its reasoning feature, or expose `'concise'` summaries without the `'reasoningConcise'` feature. Please encode these as one discriminated shape so the catalog itself prevents that drift.

As per coding guidelines: Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety.
Also applies to: 46-59, 95-115
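A minimal sketch of such a discriminated shape, using hypothetical field names (the real catalog entries and feature maps differ):

```typescript
// Hypothetical entry variants: the reasoning discriminant and the feature
// keys travel together in one variant, so they cannot drift apart.
type NonReasoningEntry = {
  reasoning: false
  features: { toolCalls: boolean } // reasoning-related keys not expressible here
}

type ReasoningEntry = {
  reasoning: true
  features: { toolCalls: boolean; reasoningConcise: boolean }
}

type TextModelEntry = NonReasoningEntry | ReasoningEntry

// Provider options derive from the same variant: only reasoning entries
// expose reasoning options at all.
type TextProviderOptionsForEntry<E extends TextModelEntry> =
  E extends ReasoningEntry
    ? { reasoning?: { effort?: 'low' | 'high'; summary?: 'auto' | 'concise' } }
    : { reasoning?: never }

// Example entries: the compiler rejects a reasoning feature map on a
// reasoning: false entry, and vice versa.
const gpt41: TextModelEntry = {
  reasoning: false,
  features: { toolCalls: true },
}
const o3Pro: TextModelEntry = {
  reasoning: true,
  features: { toolCalls: true, reasoningConcise: false },
}
```

Because both the feature map and the options type hang off the same discriminant, the invariant is enforced at catalog-definition time rather than by convention.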
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openai/src/models/text.ts` around lines 39 - 42, The current types allow features.reasoning and entry.reasoning to drift; make reasoning support a single discriminated type by replacing the separate NonReasoningTextProviderFeatureMap and related omissions with a union that couples the feature flags and entry shape (e.g., a "noReasoning" variant with TextProviderFeatureMap lacking reasoning/reasoningConcise and an explicit "reasoning" variant with the corresponding reasoning/reasoningConcise flags), then update TextProviderOptionsForEntry to use that discriminated union so the presence of features.reasoning and features.reasoningConcise is determined by the same variant that defines entry.reasoning; change references to TextProviderFeatureMap, NonReasoningTextProviderFeatureMap, and TextProviderOptionsForEntry accordingly and ensure the union discriminant is entry.reasoning (or a dedicated literal) so the compiler enforces the invariant across model configs.

packages/typescript/ai-openai/tests/model-meta.test.ts (1)
100-114: Use an exact own-key check instead of `toHaveProperty(id)` for defensive clarity.

While Vitest's `toHaveProperty` does interpret dotted ids as property paths (treating `'gpt-5.4-2026-03-05'` as a nested lookup), the current test passes because snapshot ids are stored in a `snapshots` array property, not as direct registry keys. However, using an explicit own-property check makes the intent clearer and prevents accidental regressions if snapshots are ever added as object keys.

Proposed fix

```diff
+  const hasOwnKey = (value: object, key: PropertyKey) =>
+    Object.prototype.hasOwnProperty.call(value, key)
+
   for (const id of OPENAI_CHAT_SNAPSHOT_MODELS) {
     expect(OPENAI_CHAT_MODELS).toContain(id)
-    expect(TEXT_MODELS).not.toHaveProperty(id)
+    expect(hasOwnKey(TEXT_MODELS, id)).toBe(false)
   }
   for (const id of OPENAI_IMAGE_SNAPSHOT_MODELS) {
     expect(OPENAI_IMAGE_MODELS).toContain(id)
-    expect(IMAGE_MODELS).not.toHaveProperty(id)
+    expect(hasOwnKey(IMAGE_MODELS, id)).toBe(false)
   }
   for (const id of OPENAI_TTS_SNAPSHOT_MODELS) {
     expect(OPENAI_TTS_MODELS).toContain(id)
-    expect(TTS_MODELS).not.toHaveProperty(id)
+    expect(hasOwnKey(TTS_MODELS, id)).toBe(false)
   }
   for (const id of OPENAI_TRANSCRIPTION_SNAPSHOT_MODELS) {
     expect(OPENAI_TRANSCRIPTION_MODELS).toContain(id)
-    expect(TRANSCRIPTION_MODELS).not.toHaveProperty(id)
+    expect(hasOwnKey(TRANSCRIPTION_MODELS, id)).toBe(false)
   }
```

Also applies to: 303-305
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openai/tests/model-meta.test.ts` around lines 100 - 114, Replace the use of Vitest's toHaveProperty for negative checks with an explicit own-property check: for each loop that iterates OPENAI_CHAT_SNAPSHOT_MODELS, OPENAI_IMAGE_SNAPSHOT_MODELS, OPENAI_TTS_SNAPSHOT_MODELS, and OPENAI_TRANSCRIPTION_SNAPSHOT_MODELS, change the assertions like expect(TEXT_MODELS).not.toHaveProperty(id) to assert that the registry object does not have an own key (e.g. use Object.prototype.hasOwnProperty.call(TEXT_MODELS, id) and expect(...).toBe(false)); do the same for IMAGE_MODELS, TTS_MODELS and TRANSCRIPTION_MODELS so the test verifies direct own keys rather than property-path lookups.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/typescript/ai-openai/tests/model-meta.test.ts`:
- Around line 172-200: Replace structural inheritance checks for the optional
reasoning property with key-presence assertions: for each occurrence of
expectTypeOf<...>().toExtend<OpenAIReasoningOptions>() (e.g., GPT54Options,
O1MiniOptions, O3ProOptions, ComputerUseOptions, GPT53ChatOptions,
GPT41Options), instead assert the extracted key presence using Extract<keyof T,
'reasoning'> and compare with toEqualTypeOf<'reasoning'>() when the type should
expose the reasoning key or toEqualTypeOf<never>() when it should not; update
the tests that currently call toExtend<OpenAIReasoningOptions>() to follow the
pattern used on lines ~147–153 so they check Extract<keyof T, 'reasoning'> for
each corresponding option type.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 5ea4fade-a789-49bd-8a47-a213f000087b
📒 Files selected for processing (10)
- packages/typescript/ai-openai/src/image/image-provider-options.ts
- packages/typescript/ai-openai/src/models/text.ts
- packages/typescript/ai-openai/tests/image-adapter.test.ts
- packages/typescript/ai-openai/tests/model-meta.test.ts
- packages/typescript/ai-openai/tests/openai-adapter.test.ts
- packages/typescript/ai-openai/tests/realtime-adapter.test.ts
- packages/typescript/ai-openai/tests/realtime-token.test.ts
- packages/typescript/ai-openai/tests/transcription-adapter.test.ts
- packages/typescript/ai-openai/tests/tts-adapter.test.ts
- packages/typescript/ai-openai/tests/video-adapter.test.ts
✅ Files skipped from review due to trivial changes (1)
- packages/typescript/ai-openai/src/image/image-provider-options.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/typescript/ai-openai/tests/image-adapter.test.ts
- packages/typescript/ai-openai/tests/video-adapter.test.ts
```typescript
expectTypeOf<GPT54Options>().toExtend<OpenAIReasoningOptions>()
expectTypeOf<GPT54Options>().toExtend<OpenAIStructuredOutputOptions>()
expectTypeOf<GPT54Options>().toExtend<OpenAIToolsOptions>()
expectTypeOf<GPT53ChatOptions>().not.toExtend<OpenAIReasoningOptions>()
expectTypeOf<GPT53ChatOptions>().toExtend<OpenAIStructuredOutputOptions>()
expectTypeOf<GPT53ChatOptions>().toExtend<OpenAIToolsOptions>()
expectTypeOf<GPT5ProOptions>().toExtend<OpenAIStructuredOutputOptions>()
expectTypeOf<GPT41Options>().not.toExtend<OpenAIReasoningOptions>()
expectTypeOf<GPT41Options>().toExtend<OpenAIStructuredOutputOptions>()
expectTypeOf<GPT41Options>().toExtend<OpenAIToolsOptions>()
expectTypeOf<GPT4TurboOptions>().not.toExtend<OpenAIStructuredOutputOptions>()
expectTypeOf<GPT4TurboOptions>().toExtend<OpenAIToolsOptions>()
expectTypeOf<GPT35TurboOptions>().not.toExtend<OpenAIStreamingOptions>()
expectTypeOf<GPT35TurboOptions>().not.toExtend<OpenAIToolsOptions>()
expectTypeOf<SearchPreviewOptions>().toExtend<OpenAIStructuredOutputOptions>()
expectTypeOf<SearchPreviewOptions>().not.toExtend<OpenAIToolsOptions>()
expectTypeOf<O1MiniOptions>().toExtend<OpenAIReasoningOptions>()
expectTypeOf<O1MiniOptions>().not.toExtend<OpenAIStructuredOutputOptions>()
expectTypeOf<O1MiniOptions>().not.toExtend<OpenAIToolsOptions>()
expectTypeOf<O3ProOptions>().toExtend<OpenAIReasoningOptions>()
expectTypeOf<O3ProOptions>().toExtend<OpenAIStructuredOutputOptions>()
expectTypeOf<O3ProOptions>().toExtend<OpenAIToolsOptions>()
expectTypeOf<O3ProOptions>().not.toExtend<OpenAIStreamingOptions>()
expectTypeOf<ComputerUseOptions>().toExtend<OpenAIReasoningOptionsWithConcise>()
expectTypeOf<ComputerUseOptions>().not.toExtend<OpenAIStreamingOptions>()
```
🧩 Analysis chain
🏁 Script executed:
fd -t f "text-provider-options.ts" --path "*ai-openai*"
Repository: TanStack/ai
Length of output: 288
🏁 Script executed:
cat -n packages/typescript/ai-openai/src/text/text-provider-options.ts | head -100
Length of output: 6269
🏁 Script executed:
rg "OpenAIReasoningOptions" packages/typescript/ai-openai/src/text/text-provider-options.ts -A 5 -B 2
Length of output: 1096
🏁 Script executed:
cat -n packages/typescript/ai-openai/tests/model-meta.test.ts | sed -n '147,153p'
Length of output: 531
🏁 Script executed:
cat -n packages/typescript/ai-openai/tests/model-meta.test.ts | sed -n '140,170p'
Length of output: 1928
🏁 Script executed:
cat -n packages/typescript/ai-openai/tests/model-meta.test.ts | sed -n '172,200p'
Length of output: 2119
🏁 Script executed:
rg "export type OpenAIReasoningOptions" packages/typescript/ai-openai/src/text/text-provider-options.ts -A 15
Length of output: 1828
🏁 Script executed:
rg "reasoning\?" packages/typescript/ai-openai/src/text/text-provider-options.ts | head -20
Length of output: 70
🏁 Script executed:
rg "GPT54Options|O1MiniOptions|ComputerUseOptions" packages/typescript/ai-openai/src -B 2 -A 3 | head -50
Length of output: 37
🏁 Script executed:
fd -t f "model-meta" --path "*ai-openai*" | head -5
Length of output: 288
🏁 Script executed:
fd -t f model-meta packages/typescript/ai-openai
Length of output: 157
🏁 Script executed:
rg "type GPT54Options|type O1MiniOptions|type ComputerUseOptions" packages/typescript/ai-openai -B 5 -A 20
Length of output: 4386
🏁 Script executed:
rg "OpenAIChatModelProviderOptionsByName" packages/typescript/ai-openai/src -l
Length of output: 196
🏁 Script executed:
cat -n packages/typescript/ai-openai/src/model-meta.ts | head -150
Length of output: 3539
🏁 Script executed:
rg "export type OpenAIChatModelProviderOptionsByName" packages/typescript/ai-openai/src/model-meta.ts -A 30
Length of output: 1215
🏁 Script executed:
rg "type TextProviderOptionsForEntry" packages/typescript/ai-openai/src/models/text.ts -A 20
Length of output: 746
🏁 Script executed:
cat packages/typescript/ai-openai/src/models/text.ts | head -200
Length of output: 5615
🏁 Script executed:
rg "'gpt-5.3-chat-latest'|'o1-mini'|'computer-use-preview'" packages/typescript/ai-openai/src/models/text.ts -A 12
Length of output: 2759
🏁 Script executed:
rg "type OpenAIReasoningOptionsWithConcise|reasoningConcise" packages/typescript/ai-openai/src/models/text.ts -B 5 -A 5
Length of output: 918
🏁 Script executed:
cat -n packages/typescript/ai-openai/src/models/text.ts | sed -n '70,95p'
Length of output: 799
🏁 Script executed:
python3 << 'EOF'
# Let's verify TypeScript's structural assignment behavior with optional properties
import json
test_code = """
// Test 1: Empty object extends type with optional property
type OptionalProp = { optional?: string }
type EmptyObj = {}
type Test1 = EmptyObj extends OptionalProp ? true : false
// Test1 should be: true (empty object structurally extends optional property type)
// Test 2: Object without the key
type NoKey = { other: string }
type Test2 = NoKey extends OptionalProp ? true : false
// Test2 should be: true (object without key still extends optional property type)
// Test 3: Check presence of key
type NoKeyHasOptional = Extract<keyof NoKey, 'optional'>
// NoKeyHasOptional should be: never
type WithKeyHasOptional = Extract<keyof OptionalProp, 'optional'>
// WithKeyHasOptional should be: 'optional'
"""
print("TypeScript Structural Assignment Rules:")
print("========================================")
print("When a type only has optional properties:")
print("- {} extends { optional?: T } → TRUE (empty object satisfies optional-only interface)")
print("- { other: X } extends { optional?: T } → TRUE (object without the optional key still satisfies)")
print()
print("Therefore:")
print("- Models WITHOUT reasoning feature → .toExtend<OpenAIReasoningOptions>() will PASS (incorrect)")
print("- Models WITHOUT reasoning feature → .not.toExtend<OpenAIReasoningOptions>() will FAIL (wrong result)")
print()
print("Using Extract<keyof T, 'reasoning'> is more precise:")
print("- Models WITH reasoning → Extract<..., 'reasoning'> = 'reasoning'")
print("- Models WITHOUT reasoning → Extract<..., 'reasoning'> = never")
EOF
Length of output: 752
🏁 Script executed:
cat -n packages/typescript/ai-openai/tests/model-meta.test.ts | sed -n '155,200p'
Length of output: 3148
Replace toExtend<OpenAIReasoningOptions>() with key-presence assertions using Extract<keyof T, 'reasoning'>.
OpenAIReasoningOptions only adds an optional reasoning property, so structurally any provider-options object type (including those without the reasoning feature) extends it. This makes the positive assertions pass incorrectly and the negative assertions fail when they should pass. Use the pattern from lines 147–153: extract the key and assert toEqualTypeOf<'reasoning'>() or toEqualTypeOf<never>().
Proposed fix
+ type GPT54HasReasoning = Extract<keyof GPT54Options, 'reasoning'>
+ type GPT53ChatHasReasoning = Extract<keyof GPT53ChatOptions, 'reasoning'>
+ type GPT5ProHasReasoning = Extract<keyof GPT5ProOptions, 'reasoning'>
+ type GPT41HasReasoning = Extract<keyof GPT41Options, 'reasoning'>
+ type O1MiniHasReasoning = Extract<keyof O1MiniOptions, 'reasoning'>
+ type O3ProHasReasoning = Extract<keyof O3ProOptions, 'reasoning'>
+ type ComputerUseHasReasoning = Extract<keyof ComputerUseOptions, 'reasoning'>
+
- expectTypeOf<GPT54Options>().toExtend<OpenAIReasoningOptions>()
+ expectTypeOf<GPT54HasReasoning>().toEqualTypeOf<'reasoning'>()
expectTypeOf<GPT54Options>().toExtend<OpenAIStructuredOutputOptions>()
expectTypeOf<GPT54Options>().toExtend<OpenAIToolsOptions>()
- expectTypeOf<GPT53ChatOptions>().not.toExtend<OpenAIReasoningOptions>()
+ expectTypeOf<GPT53ChatHasReasoning>().toEqualTypeOf<never>()
expectTypeOf<GPT53ChatOptions>().toExtend<OpenAIStructuredOutputOptions>()
expectTypeOf<GPT53ChatOptions>().toExtend<OpenAIToolsOptions>()
- expectTypeOf<GPT5ProOptions>().toExtend<OpenAIStructuredOutputOptions>()
+ expectTypeOf<GPT5ProHasReasoning>().toEqualTypeOf<never>()
+ expectTypeOf<GPT5ProOptions>().toExtend<OpenAIStructuredOutputOptions>()
- expectTypeOf<GPT41Options>().not.toExtend<OpenAIReasoningOptions>()
+ expectTypeOf<GPT41HasReasoning>().toEqualTypeOf<never>()
expectTypeOf<GPT41Options>().toExtend<OpenAIStructuredOutputOptions>()
expectTypeOf<GPT41Options>().toExtend<OpenAIToolsOptions>()
expectTypeOf<GPT4TurboOptions>().not.toExtend<OpenAIStructuredOutputOptions>()
expectTypeOf<GPT4TurboOptions>().toExtend<OpenAIToolsOptions>()
expectTypeOf<GPT35TurboOptions>().not.toExtend<OpenAIStreamingOptions>()
expectTypeOf<GPT35TurboOptions>().not.toExtend<OpenAIToolsOptions>()
expectTypeOf<SearchPreviewOptions>().toExtend<
OpenAIStructuredOutputOptions
>()
expectTypeOf<SearchPreviewOptions>().not.toExtend<OpenAIToolsOptions>()
- expectTypeOf<O1MiniOptions>().toExtend<OpenAIReasoningOptions>()
+ expectTypeOf<O1MiniHasReasoning>().toEqualTypeOf<'reasoning'>()
expectTypeOf<O1MiniOptions>().not.toExtend<OpenAIStructuredOutputOptions>()
expectTypeOf<O1MiniOptions>().not.toExtend<OpenAIToolsOptions>()
- expectTypeOf<O3ProOptions>().toExtend<OpenAIReasoningOptions>()
+ expectTypeOf<O3ProHasReasoning>().toEqualTypeOf<'reasoning'>()
expectTypeOf<O3ProOptions>().toExtend<OpenAIStructuredOutputOptions>()
expectTypeOf<O3ProOptions>().toExtend<OpenAIToolsOptions>()
expectTypeOf<O3ProOptions>().not.toExtend<OpenAIStreamingOptions>()
- expectTypeOf<ComputerUseOptions>().toExtend<
- OpenAIReasoningOptionsWithConcise
- >()
+ expectTypeOf<ComputerUseHasReasoning>().toEqualTypeOf<'reasoning'>()
expectTypeOf<ComputerUseOptions>().not.toExtend<OpenAIStreamingOptions>()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-openai/tests/model-meta.test.ts` around lines 172 -
200, Replace structural inheritance checks for the optional reasoning property
with key-presence assertions: for each occurrence of
expectTypeOf<...>().toExtend<OpenAIReasoningOptions>() (e.g., GPT54Options,
O1MiniOptions, O3ProOptions, ComputerUseOptions, GPT53ChatOptions,
GPT41Options), instead assert the extracted key presence using Extract<keyof T,
'reasoning'> and compare with toEqualTypeOf<'reasoning'>() when the type should
expose the reasoning key or toEqualTypeOf<never>() when it should not; update
the tests that currently call toExtend<OpenAIReasoningOptions>() to follow the
pattern used on lines ~147–153 so they check Extract<keyof T, 'reasoning'> for
each corresponding option type.
Actionable comments posted: 1
🧹 Nitpick comments (1)
packages/typescript/ai-openai/tests/model-meta.test.ts (1)
71-78: Make these export checks independent from src/models/shared.ts.
packages/typescript/ai-openai/src/model-meta.ts already defines these arrays by calling supportedIds(...), idsByStatus(...), and snapshotIds(...) directly. Reusing the same helpers here means a bug in packages/typescript/ai-openai/src/models/shared.ts will update the production value and the expected value together, so these assertions won't catch it. Please derive the expectations directly from the raw registries here, or move the helper-specific behavior into dedicated tests for packages/typescript/ai-openai/src/models/shared.ts.
Also applies to: 80-92, 94-100, 307-308
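As an illustration of the suggested decoupling, a test could recompute the expected ids from the raw registry literal instead of calling the production helper; the registry shape and values below are assumptions, not the package's actual code:

```typescript
// Hypothetical raw registry, mirroring the shape the comment describes.
const TEXT_MODELS = {
  'gpt-4.1': { status: 'supported' },
  'gpt-4-turbo': { status: 'deprecated' },
} as const

// Recompute the expectation independently of supportedIds(), so a bug in
// the shared helper cannot update both sides of the assertion at once.
const expectedSupported = Object.entries(TEXT_MODELS)
  .filter(([, meta]) => meta.status === 'supported')
  .map(([id]) => id)

console.log(expectedSupported)
```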
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openai/tests/model-meta.test.ts` around lines 71 - 78, The test currently derives expected arrays by calling the same helpers (supportedIds, idsByStatus, snapshotIds) used to build the exported constants (OPENAI_CHAT_MODELS, OPENAI_IMAGE_MODELS, etc.), so a bug in src/models/shared.ts would mask failures; change the assertions in model-meta.test.ts to compute expectations directly from the raw registries (e.g., derive the expected chat ids by iterating/filtering TEXT_MODELS to match the intended criteria, likewise derive image/video/tts/transcription/realtime expectations from IMAGE_MODELS/VIDEO_MODELS/TTS_MODELS/TRANSCRIPTION_MODELS/REALTIME_MODELS) instead of calling supportedIds/idsByStatus/snapshotIds, and move any tests that validate helper behavior into dedicated tests for the helper functions (supportedIds, idsByStatus, snapshotIds) so export checks for OPENAI_* constants validate production values against raw registries only.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/typescript/ai-openai/tests/model-meta.test.ts`:
- Around line 309-311: The test uses toHaveProperty(id) which treats id as a
property path; replace that assertion with an exact-key check using hasOwn() to
match the other snapshot loops: iterate OPENAI_REALTIME_SNAPSHOT_MODELS and
assert OPENAI_REALTIME_MODELS contains id and that REALTIME_MODELS.hasOwn(id) is
false (or expect(hasOwn(REALTIME_MODELS, id)).toBe(false)) so the test checks
for a literal key rather than a property path; update the assertion referencing
OPENAI_REALTIME_SNAPSHOT_MODELS, OPENAI_REALTIME_MODELS, and REALTIME_MODELS
accordingly.
---
Nitpick comments:
In `@packages/typescript/ai-openai/tests/model-meta.test.ts`:
- Around line 71-78: The test currently derives expected arrays by calling the
same helpers (supportedIds, idsByStatus, snapshotIds) used to build the exported
constants (OPENAI_CHAT_MODELS, OPENAI_IMAGE_MODELS, etc.), so a bug in
src/models/shared.ts would mask failures; change the assertions in
model-meta.test.ts to compute expectations directly from the raw registries
(e.g., derive the expected chat ids by iterating/filtering TEXT_MODELS to match
the intended criteria, likewise derive image/video/tts/transcription/realtime
expectations from
IMAGE_MODELS/VIDEO_MODELS/TTS_MODELS/TRANSCRIPTION_MODELS/REALTIME_MODELS)
instead of calling supportedIds/idsByStatus/snapshotIds, and move any tests that
validate helper behavior into dedicated tests for the helper functions
(supportedIds, idsByStatus, snapshotIds) so export checks for OPENAI_* constants
validate production values against raw registries only.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 84a62b61-3de3-48a9-a8b6-89a080895684
📒 Files selected for processing (4)
- packages/typescript/ai-openai/src/models/text.ts
- packages/typescript/ai-openai/tests/model-meta.test.ts
- packages/typescript/ai-openai/tests/transcription-adapter.test.ts
- packages/typescript/ai-openai/tests/tts-adapter.test.ts
✅ Files skipped from review due to trivial changes (2)
- packages/typescript/ai-openai/tests/tts-adapter.test.ts
- packages/typescript/ai-openai/tests/transcription-adapter.test.ts
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
🎯 Changes
This adds the latest OpenAI models and snapshots across text, audio, image, video, and realtime.
It also replaces the old ai-openai metadata with registry-backed catalogs so there’s a single source of truth for supported models and capabilities, which avoids the silent drift that had already started to show up across types, validation, and exports.
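A registry-backed catalog of this kind might look like the following sketch; the model ids, fields, and helper names are illustrative, not the PR's actual registry:

```typescript
// Single source of truth: one const registry; the id union, exported list,
// and validation are all derived from it, so they cannot drift apart.
const CHAT_MODELS = {
  'gpt-4.1': { supportsTools: true },
  'gpt-3.5-turbo': { supportsTools: false },
} as const

type ChatModelId = keyof typeof CHAT_MODELS

const OPENAI_CHAT_MODELS = Object.keys(CHAT_MODELS) as Array<ChatModelId>

// Validation narrows an arbitrary string to a known id.
function assertKnownModel(id: string): asserts id is ChatModelId {
  if (!(id in CHAT_MODELS)) {
    throw new Error(`Unknown chat model: ${id}`)
  }
}

assertKnownModel('gpt-4.1')
console.log(OPENAI_CHAT_MODELS)
```

Adding a model then means editing exactly one object literal; the types, exports, and validators follow automatically.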
✅ Checklist
this doesn't exist
pnpm run test:pr.
🚀 Release Impact
Summary by CodeRabbit
New Features
Improvements
Bug Fixes