
refactor(ai-openai): redesign model metadata catalogs #399

Open
nick-potts wants to merge 6 commits into TanStack:main from nick-potts:feat/openai-model-meta-catalogs

Conversation


@nick-potts nick-potts commented Mar 25, 2026

🎯 Changes

This adds the latest OpenAI models and snapshots across text, audio, image, video, and realtime.

It also replaces the old ai-openai metadata with registry-backed catalogs so there’s a single source of truth for supported models and capabilities, which avoids the silent drift that had already started to show up across types, validation, and exports.

✅ Checklist

  • I have followed the steps in the Contributing guide. (Author's note: this guide doesn't exist.)
  • I have tested this code locally with pnpm run test:pr.

🚀 Release Impact

  • This change affects published code, and I have generated a changeset.
  • This change is docs/CI/dev-only (no release).

Summary by CodeRabbit

  • New Features

    • Unified model registry across modalities with additional image, audio, and video model entries.
  • Improvements

    • Metadata-driven validation for model options and clearer documented supported models (image, TTS, realtime).
    • Realtime default switched to a newer realtime model.
  • Bug Fixes

    • Safer transcription handling with robust fallbacks when upstream responses lack expected fields.
    • Validation now warns or rejects more predictably for unknown/unsupported models.
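The transcription fallback mentioned above could, for illustration, follow a pattern like this; the field names and helper below are hypothetical, not the PR's actual code:

```typescript
// Hypothetical sketch of "robust fallbacks when upstream responses lack
// expected fields": the payload may or may not carry a top-level `text`,
// so the accessor degrades gracefully instead of throwing.
interface TranscriptionLike {
  text?: string
  segments?: Array<{ text: string }>
}

function extractTranscriptText(response: TranscriptionLike): string {
  // Prefer the top-level text field when present and non-empty.
  if (typeof response.text === 'string' && response.text.length > 0) {
    return response.text
  }
  // Fall back to joining segment texts when the top-level field is absent.
  if (Array.isArray(response.segments)) {
    return response.segments.map((segment) => segment.text).join(' ')
  }
  return ''
}
```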


coderabbitai bot commented Mar 25, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough


Centralizes OpenAI model metadata into typed registries and shared utilities; replaces hardcoded per-model lists with registry-driven types/validators/exports; updates adapters, realtime defaults, and validation logic to consume the registry; and adds tests plus a changeset for a minor release.

Changes

  • Registry core & public entry
    Files: packages/typescript/ai-openai/src/models/shared.ts, packages/typescript/ai-openai/src/models/index.ts
    Add typed registry primitives, lifecycle/docs/limits types, utility functions (supportedIds, snapshotIds, idsByStatus), and central models entrypoint exports.
  • Model registries (new)
    Files: packages/typescript/ai-openai/src/models/text.ts, .../image.ts, .../audio.ts, .../video.ts, .../index.ts
    Introduce modality registries (TEXT_MODELS, IMAGE_MODELS, TTS_MODELS, TRANSCRIPTION_MODELS, REALTIME_MODELS, VIDEO_MODELS) with typed specs, snapshots, statuses, feature flags, and docs metadata.
  • Model meta & top-level exports
    Files: packages/typescript/ai-openai/src/model-meta.ts, packages/typescript/ai-openai/src/meta/realtime.ts, packages/typescript/ai-openai/src/index.ts
    Refactor to derive public model arrays/types and realtime sets from registries; add many registry-derived constants and re-exported types; remove manual per-model lists.
  • Provider options & validators
    Files: packages/typescript/ai-openai/src/image/image-provider-options.ts, .../video/video-provider-options.ts, .../audio/audio-provider-options.ts, .../audio/transcription-provider-options.ts, .../text/text-provider-options.ts
    Replace hardcoded validation data with registry lookups; validators now verify model existence and consult model metadata (sizes, maxImages, supportsBackground, supportsStreaming, supportsInstructions, maxPromptLength). Some formerly exported per-model type maps are removed.
  • Adapters & minor API changes
    Files: packages/typescript/ai-openai/src/adapters/text.ts, .../image.ts, .../video.ts, .../tts.ts, .../transcription.ts
    Reorganize imports to use registry-derived types; parameterize text provider options by model; the video adapter stores clientConfig for fallback values; transcription request construction is refactored and response text access hardened; JSDoc model lists updated.
  • Realtime surface changes
    Files: packages/typescript/ai-openai/src/realtime/adapter.ts, .../token.ts, .../types.ts
    Change the default realtime model to gpt-realtime-1.5; tighten session tool typing; replace the literal realtime union with a catalog-derived type.
  • Tests added/updated
    Files: packages/typescript/ai-openai/tests/model-meta.test.ts, transcription-adapter.test.ts, video-adapter.test.ts, audio-provider-options.test.ts, image-adapter.test.ts, text-provider-options.test.ts, realtime-*.test.ts, openai-adapter.test.ts, tts-adapter.test.ts
    Add a registry-driven test suite and multiple adapter/validation tests asserting registry-derived exports, validation behavior, request construction, and response mapping.
  • Changelog / changeset
    Files: .changeset/major-teeth-greet.md
    Add a changeset marking a minor release documenting the introduction of a single-source-of-truth model registry covering text, image, video, and audio models.
  • Minor type/export removals
    Files: packages/typescript/ai-openai/src/image/image-provider-options.ts, packages/typescript/ai-openai/src/video/video-provider-options.ts, packages/typescript/ai-openai/src/index.ts
    Remove several formerly exported per-model provider-options/size maps and re-export registry-derived equivalents from model-meta.
  • Small adapter doc/type tweaks
    Files: packages/typescript/ai-openai/src/adapters/image.ts, .../tts.ts, .../realtime/*
    Update JSDoc model lists and reorganize some type-only imports; no runtime logic changes except realtime defaults and minor typing refinements.
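The registry utilities named above (supportedIds, snapshotIds, idsByStatus) could be sketched roughly as follows; the entry shape and example ids are illustrative assumptions, not the PR's actual types:

```typescript
// Hypothetical registry primitives: each entry carries a lifecycle status
// and its dated snapshot ids, and the utilities derive id lists from that.
type ModelStatus = 'active' | 'deprecated'

interface ModelSpec {
  status: ModelStatus
  snapshots: ReadonlyArray<string>
}

type ModelRegistry = Record<string, ModelSpec>

// Example registry with made-up entries and snapshot ids.
const EXAMPLE_MODELS = {
  'gpt-image-1': { status: 'active', snapshots: ['gpt-image-1-2025-04-15'] },
  'dall-e-2': { status: 'deprecated', snapshots: [] },
} as const satisfies ModelRegistry

// Top-level alias ids supported by the registry.
function supportedIds(registry: ModelRegistry): Array<string> {
  return Object.keys(registry)
}

// Dated snapshot ids, flattened across all entries.
function snapshotIds(registry: ModelRegistry): Array<string> {
  return Object.values(registry).flatMap((spec) => [...spec.snapshots])
}

// Ids filtered by lifecycle status.
function idsByStatus(registry: ModelRegistry, status: ModelStatus): Array<string> {
  return Object.entries(registry)
    .filter(([, spec]) => spec.status === status)
    .map(([id]) => id)
}
```

With a shape like this, the public model arrays and validators can all be derived from one object, which is the drift-avoidance the PR description claims.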

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Poem

🐇 I hopped through registries with carrot-bright code,

One source of truth now maps every model's road.
Types snug in burrows, validators in place,
Tests sniff the tunnels, adapters keep pace.
A little rabbit cheer — ship the registry's grace!

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Title check: ✅ Passed. The title 'refactor(ai-openai): redesign model metadata catalogs' clearly and concisely describes the main change: replacing manual metadata with registry-backed catalogs for OpenAI models.
  • Description check: ✅ Passed. The description addresses the primary changes (new models, registry-backed catalogs for a single source of truth) and includes a completed changeset release impact, though the Contributing guide step is incomplete.
  • Docstring coverage: ✅ Passed. Docstring coverage is 83.33%, which meets the required threshold of 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/typescript/ai-openai/src/image/image-provider-options.ts (1)

252-262: ⚠️ Potential issue | 🟡 Minor

Inconsistent unknown model handling in validatePrompt.

validatePrompt silently returns when the model is unknown (Line 256), while validateImageSize, validateNumberOfImages, and validateBackground all throw errors for unknown models. This inconsistency could mask configuration issues for unknown models.

Consider aligning the behavior:

🔧 Suggested fix for consistent error handling
 export const validatePrompt = (options: ImageValidationOptions) => {
   if (options.prompt.length === 0) {
     throw new Error('Prompt cannot be empty.')
   }
-  if (!Object.hasOwn(IMAGE_MODELS, options.model)) return
+  if (!Object.hasOwn(IMAGE_MODELS, options.model)) {
+    throw new Error(`Unknown image model: ${options.model}`)
+  }
   const modelMeta = IMAGE_MODELS[options.model as keyof typeof IMAGE_MODELS]
   if (options.prompt.length > modelMeta.maxPromptLength) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-openai/src/image/image-provider-options.ts` around
lines 252 - 262, validatePrompt currently returns silently when options.model is
not in IMAGE_MODELS, unlike
validateImageSize/validateNumberOfImages/validateBackground which throw for
unknown models; change validatePrompt (in validatePrompt function) to throw a
descriptive Error when Object.hasOwn(IMAGE_MODELS, options.model) is false
(e.g., "Unknown image model: {options.model}") instead of returning, so
model-handling is consistent across IMAGE_MODELS checks.
🧹 Nitpick comments (2)
packages/typescript/ai-openai/src/audio/audio-provider-options.ts (2)

70-79: Same fail-open pattern applies to validateInstructions.

As with validateStreamFormat, unregistered models bypass instruction validation. The consistency is good, but the same forward-compatibility vs. early-error tradeoff applies.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-openai/src/audio/audio-provider-options.ts` around
lines 70 - 79, validateInstructions currently returns early when options.model
is not in TTS_MODELS, allowing unregistered models to bypass instruction
validation; change this to fail-fast by throwing a clear Error when the model
key is unknown (instead of returning), then proceed to check
modelMeta.supportsInstructions and throw the existing error if instructions are
not supported; reference the validateInstructions function and the TTS_MODELS
lookup so you update that conditional branch to raise an Error for unknown
models rather than returning.

51-59: Consider the fail-open behavior for unknown models.

When options.model is not in TTS_MODELS, validation silently returns without error. This means typos or future models bypass stream_format validation entirely, potentially leading to less helpful runtime API errors.

This appears intentional for forward compatibility, but consider whether a warning or explicit "unknown model" handling would improve developer experience.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-openai/src/audio/audio-provider-options.ts` around
lines 51 - 59, The current validateStreamFormat function returns silently when
options.model is not in TTS_MODELS, which can hide typos or unsupported models;
update validateStreamFormat to detect unknown models (check
Object.hasOwn(TTS_MODELS, options.model)) and if unknown and
options.stream_format is present emit a warning (e.g., via console.warn or a
provided logger) that the model is unrecognized and stream_format may not be
supported, otherwise preserve existing behavior for known models and keep the
existing error throw when modelMeta.supportsStreaming is false; reference
validateStreamFormat and TTS_MODELS to locate the change.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 65440dc0-8666-4048-a38e-128c0cd91335

📥 Commits

Reviewing files that changed from the base of the PR and between c3583e3 and 8debfd8.

📒 Files selected for processing (24)
  • .changeset/major-teeth-greet.md
  • packages/typescript/ai-openai/src/adapters/image.ts
  • packages/typescript/ai-openai/src/adapters/text.ts
  • packages/typescript/ai-openai/src/adapters/transcription.ts
  • packages/typescript/ai-openai/src/adapters/tts.ts
  • packages/typescript/ai-openai/src/adapters/video.ts
  • packages/typescript/ai-openai/src/audio/audio-provider-options.ts
  • packages/typescript/ai-openai/src/audio/transcription-provider-options.ts
  • packages/typescript/ai-openai/src/image/image-provider-options.ts
  • packages/typescript/ai-openai/src/index.ts
  • packages/typescript/ai-openai/src/meta/realtime.ts
  • packages/typescript/ai-openai/src/model-meta.ts
  • packages/typescript/ai-openai/src/models/audio.ts
  • packages/typescript/ai-openai/src/models/image.ts
  • packages/typescript/ai-openai/src/models/index.ts
  • packages/typescript/ai-openai/src/models/shared.ts
  • packages/typescript/ai-openai/src/models/text.ts
  • packages/typescript/ai-openai/src/models/video.ts
  • packages/typescript/ai-openai/src/realtime/adapter.ts
  • packages/typescript/ai-openai/src/realtime/token.ts
  • packages/typescript/ai-openai/src/realtime/types.ts
  • packages/typescript/ai-openai/src/text/text-provider-options.ts
  • packages/typescript/ai-openai/src/video/video-provider-options.ts
  • packages/typescript/ai-openai/tests/model-meta.test.ts


@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (4)
packages/typescript/ai-openai/tests/transcription-adapter.test.ts (1)

1-4: Add afterEach cleanup for mock consistency.

Unlike audio-provider-options.test.ts and video-adapter.test.ts, this test file lacks afterEach(() => vi.restoreAllMocks()). While not currently causing issues since mocks are localized per-test, adding cleanup maintains consistency across the test suite and prevents potential test pollution if additional tests are added later.

♻️ Proposed fix
-import { describe, expect, it, vi } from 'vitest'
+import { afterEach, describe, expect, it, vi } from 'vitest'
 import { createOpenaiTranscription } from '../src/adapters/transcription'

 describe('OpenAI transcription adapter', () => {
+  afterEach(() => {
+    vi.restoreAllMocks()
+  })
+
   it('defaults non-whisper models to verbose_json and maps rich responses', async () => {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-openai/tests/transcription-adapter.test.ts` around
lines 1 - 4, Add an afterEach cleanup to restore mocks in the OpenAI
transcription adapter test file: include afterEach(() => vi.restoreAllMocks())
in the describe block for "OpenAI transcription adapter" (the test file uses vi
from vitest and the describe/it structure), matching the pattern used in
audio-provider-options.test.ts and video-adapter.test.ts to prevent mock leakage
between tests.
packages/typescript/ai-openai/tests/video-adapter.test.ts (1)

44-75: Consider clarifying the test's intent regarding duration/seconds precedence.

The test passes both duration: 8 and modelOptions.seconds: '4', then expects seconds: '8' in the payload. This verifies that duration takes precedence over modelOptions.seconds, but the test name "maps request options into create payloads" doesn't convey this override behavior.

Consider either:

  • Renaming to something like 'duration overrides modelOptions.seconds in create payloads'
  • Adding a separate test case that only uses modelOptions.seconds without duration
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-openai/tests/video-adapter.test.ts` around lines 44 -
75, Rename or add tests to make the precedence behavior explicit: update the
existing test in video-adapter.test.ts (the one invoking createOpenaiVideo(...)
and calling adapter.createVideoJob(...)) to either rename it to "duration
overrides modelOptions.seconds in create payloads" to document that duration
takes precedence over modelOptions.seconds, or add a new test that passes only
modelOptions.seconds (no duration) to verify seconds is used when duration is
absent; ensure references remain to createOpenaiVideo and createVideoJob and
that the expected payload assertions reflect the intended precedence.
packages/typescript/ai-openai/src/image/image-provider-options.ts (1)

200-204: Consider extracting model metadata lookup helper.

The pattern of checking Object.hasOwn(IMAGE_MODELS, model) and casting is repeated across all validators. A helper would reduce duplication and centralize the "unknown model" error handling.

Example helper
function getImageModelMeta(model: string) {
  if (!Object.hasOwn(IMAGE_MODELS, model)) {
    throw new Error(`Unknown image model: ${model}`)
  }
  return IMAGE_MODELS[model as keyof typeof IMAGE_MODELS]
}

Also applies to: 224-228, 242-246, 262-265

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-openai/src/image/image-provider-options.ts` around
lines 200 - 204, Extract a helper like getImageModelMeta(model: string) that
encapsulates the existing lookup and error handling around IMAGE_MODELS (i.e.,
perform Object.hasOwn(IMAGE_MODELS, model) and throw `Unknown image model:
${model}` if missing, otherwise return IMAGE_MODELS[model as keyof typeof
IMAGE_MODELS]); then replace the repeated pattern in the validators (where you
currently check Object.hasOwn(IMAGE_MODELS, model) and assign modelMeta) with
calls to getImageModelMeta(model) so all lookups use the same centralized logic
and error message.
packages/typescript/ai-openai/src/text/text-provider-options.ts (1)

230-233: ExternalTextProviderOptions is currently widened beyond per-model safety.

AnyReasoningOptions makes ExternalTextProviderOptions accept broad reasoning combinations at compile time, which weakens model-specific option safety unless every call site re-narrows it.

♻️ Suggested direction
-type AnyReasoningOptions = OpenAIReasoningOptionsWithConcise
-
-export type ExternalTextProviderOptions = OpenAIBaseOptions &
-  AnyReasoningOptions &
+export type ExternalTextProviderOptions<
+  TEffort extends OpenAIReasoningEffort = OpenAIReasoningEffort,
+  TSummary extends OpenAIReasoningSummaryWithConcise = OpenAIReasoningSummaryWithConcise,
+> = OpenAIBaseOptions &
+  OpenAIReasoningOptions<TEffort, TSummary> &
   OpenAIStructuredOutputOptions &
   OpenAIToolsOptions &
   OpenAIStreamingOptions &
   OpenAIMetadataOptions

Then specialize TEffort/TSummary from the model registry where model is known.

As per coding guidelines: "Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-openai/src/text/text-provider-options.ts` around lines
230 - 233, ExternalTextProviderOptions is currently too wide because it composes
in AnyReasoningOptions, which permits any reasoning combo at all call sites;
change it to be model-specific by removing the broad AnyReasoningOptions usage
and instead make ExternalTextProviderOptions generic over the selected model (or
accept a model type parameter) and derive TEffort/TSummary from the model
registry for that model; specifically replace the union use of
AnyReasoningOptions with model-scoped reasoning types resolved from the registry
so TEffort and TSummary are specialized based on the chosen model (use the
registry lookup utility/type to map model -> its reasoning option types).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 37d5e312-ece6-404d-8f2a-cb3a1a997071

📥 Commits

Reviewing files that changed from the base of the PR and between 8debfd8 and e25b306.

📒 Files selected for processing (12)
  • packages/typescript/ai-openai/src/audio/audio-provider-options.ts
  • packages/typescript/ai-openai/src/image/image-provider-options.ts
  • packages/typescript/ai-openai/src/models/shared.ts
  • packages/typescript/ai-openai/src/models/text.ts
  • packages/typescript/ai-openai/src/text/text-provider-options.ts
  • packages/typescript/ai-openai/tests/audio-provider-options.test.ts
  • packages/typescript/ai-openai/tests/image-adapter.test.ts
  • packages/typescript/ai-openai/tests/model-meta.test.ts
  • packages/typescript/ai-openai/tests/text-provider-options.test.ts
  • packages/typescript/ai-openai/tests/transcription-adapter.test.ts
  • packages/typescript/ai-openai/tests/video-adapter.test.ts
  • packages/typescript/ai-openai/tests/video-provider-options.test.ts
✅ Files skipped from review due to trivial changes (3)
  • packages/typescript/ai-openai/tests/image-adapter.test.ts
  • packages/typescript/ai-openai/tests/video-provider-options.test.ts
  • packages/typescript/ai-openai/tests/text-provider-options.test.ts
🚧 Files skipped from review as they are similar to previous changes (3)
  • packages/typescript/ai-openai/src/audio/audio-provider-options.ts
  • packages/typescript/ai-openai/src/models/text.ts
  • packages/typescript/ai-openai/src/models/shared.ts


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (4)
packages/typescript/ai-openai/tests/transcription-adapter.test.ts (1)

27-35: Refactor repeated client-stubbing block into a local helper.

The repeated deep cast + client assignment makes updates error-prone. A tiny helper keeps tests easier to maintain.

♻️ Proposed refactor
+function attachMockCreate(
+  adapter: unknown,
+  create: ReturnType<typeof vi.fn>,
+) {
+  ;(adapter as unknown as {
+    client: { audio: { transcriptions: { create: unknown } } }
+  }).client = {
+    audio: {
+      transcriptions: { create },
+    },
+  }
+}
+
   const adapter = createOpenaiTranscription('gpt-4o-transcribe', 'test-api-key')
-  ;(adapter as unknown as { client: { audio: { transcriptions: { create: unknown } } } }).client =
-    {
-      audio: {
-        transcriptions: {
-          create,
-        },
-      },
-    }
+  attachMockCreate(adapter, create)

Also applies to: 67-75, 101-109, 136-144

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-openai/tests/transcription-adapter.test.ts` around
lines 27 - 35, Extract the repeated deep-cast and client assignment used in
tests into a small helper function (e.g., createStubbedClient or
stubAdapterClient) and replace the inline blocks that assign (adapter as unknown
as { client: { audio: { transcriptions: { create: unknown } } } }).client = {
audio: { transcriptions: { create, }, }, } with calls to that helper; the helper
should accept the create mock (or an object of mocks) and perform the cast and
assignment so test sites around adapter, create, and client (lines currently
duplicating at the four locations) become single-line helper calls for easier
maintenance.
packages/typescript/ai-openai/tests/tts-adapter.test.ts (1)

11-18: Optional: extract repeated client-mocking setup into a small helper.

Both tests duplicate the same adapter client patch block; extracting a helper would reduce noise and improve maintainability.

♻️ Suggested refactor
+const withMockedSpeechCreate = (
+  model: 'gpt-4o-mini-tts' | 'tts-1',
+  create: ReturnType<typeof vi.fn>,
+) => {
+  const adapter = createOpenaiSpeech(model, 'test-api-key')
+  ;(adapter as unknown as { client: { audio: { speech: { create: unknown } } } }).client = {
+    audio: { speech: { create } },
+  }
+  return adapter
+}
+
 describe('OpenAI TTS adapter', () => {
   it('passes supported instructions through and returns mp3 output metadata', async () => {
     const create = vi
       .fn()
       .mockResolvedValueOnce(new Response(Uint8Array.from([1, 2, 3])))
 
-    const adapter = createOpenaiSpeech('gpt-4o-mini-tts', 'test-api-key')
-    ;(adapter as unknown as { client: { audio: { speech: { create: unknown } } } }).client =
-      {
-        audio: {
-          speech: {
-            create,
-          },
-        },
-      }
+    const adapter = withMockedSpeechCreate('gpt-4o-mini-tts', create)
@@
   it('rejects unsupported instructions before calling the API', async () => {
     const create = vi.fn()
 
-    const adapter = createOpenaiSpeech('tts-1', 'test-api-key')
-    ;(adapter as unknown as { client: { audio: { speech: { create: unknown } } } }).client =
-      {
-        audio: {
-          speech: {
-            create,
-          },
-        },
-      }
+    const adapter = withMockedSpeechCreate('tts-1', create)

Also applies to: 46-53

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-openai/tests/tts-adapter.test.ts` around lines 11 -
18, Extract the repeated adapter client patch into a small helper function
(e.g., mockAdapterClient) that takes the adapter and the create mock and assigns
(adapter as unknown as { client: { audio: { speech: { create: unknown } } }
}).client = { audio: { speech: { create } } }; replace the duplicated blocks in
the tests with calls to this helper so both occurrences around adapter and
audio.speech.create use the same utility.
packages/typescript/ai-openai/src/models/text.ts (1)

39-42: Make reasoning support a single type-level invariant.

TextProviderOptionsForEntry ignores the reasoning feature keys and reconstructs reasoning support from entry.reasoning, so features and reasoning can drift independently. That means a model can still expose reasoning options after losing its reasoning feature, or expose 'concise' summaries without the 'reasoningConcise' feature. Please encode these as one discriminated shape so the catalog itself prevents that drift.

As per coding guidelines: Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety.

Also applies to: 46-59, 95-115
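One way to encode the invariant this comment asks for is a discriminated union where a single `reasoning` discriminant drives both the capability flag and the derived option types; all names below are illustrative, not the PR's actual types:

```typescript
// A non-reasoning entry carries no reasoning metadata at all.
interface NonReasoningEntry {
  reasoning: false
}

// A reasoning entry also declares whether 'concise' summaries are allowed.
interface ReasoningEntry {
  reasoning: true
  concise: boolean
}

type TextModelEntry = NonReasoningEntry | ReasoningEntry

// Options are derived from the same variant that declares the capability,
// so features and options cannot drift apart.
type OptionsForEntry<E extends TextModelEntry> = E extends ReasoningEntry
  ? {
      reasoningEffort?: 'low' | 'medium' | 'high'
      reasoningSummary?: E['concise'] extends true
        ? 'auto' | 'detailed' | 'concise'
        : 'auto' | 'detailed'
    }
  : { reasoningEffort?: never; reasoningSummary?: never }

// Example: a reasoning model without concise summaries rejects 'concise'
// at compile time but accepts 'detailed'.
const entry = { reasoning: true, concise: false } as const satisfies TextModelEntry
type Opts = OptionsForEntry<typeof entry>
const ok: Opts = { reasoningEffort: 'high', reasoningSummary: 'detailed' }
```

Because `reasoning` is the discriminant, removing a model's reasoning feature necessarily removes its reasoning option types in the same edit.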

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-openai/src/models/text.ts` around lines 39 - 42, The
current types allow features.reasoning and entry.reasoning to drift; make
reasoning support a single discriminated type by replacing the separate
NonReasoningTextProviderFeatureMap and related omissions with a union that
couples the feature flags and entry shape (e.g., a "noReasoning" variant with
TextProviderFeatureMap lacking reasoning/reasoningConcise and an explicit
"reasoning" variant with the corresponding reasoning/reasoningConcise flags),
then update TextProviderOptionsForEntry to use that discriminated union so the
presence of features.reasoning and features.reasoningConcise is determined by
the same variant that defines entry.reasoning; change references to
TextProviderFeatureMap, NonReasoningTextProviderFeatureMap, and
TextProviderOptionsForEntry accordingly and ensure the union discriminant is
entry.reasoning (or a dedicated literal) so the compiler enforces the invariant
across model configs.
packages/typescript/ai-openai/tests/model-meta.test.ts (1)

100-114: Use an exact own-key check instead of toHaveProperty(id) for defensive clarity.

While Vitest's toHaveProperty does interpret dotted ids as property paths (treating 'gpt-5.4-2026-03-05' as a nested lookup), the current test passes because snapshot ids are stored in a snapshots array property, not as direct registry keys. However, using an explicit own-property check makes the intent clearer and prevents accidental regressions if snapshots are ever added as object keys.

Proposed fix
+  const hasOwnKey = (value: object, key: PropertyKey) =>
+    Object.prototype.hasOwnProperty.call(value, key)
+
     for (const id of OPENAI_CHAT_SNAPSHOT_MODELS) {
       expect(OPENAI_CHAT_MODELS).toContain(id)
-      expect(TEXT_MODELS).not.toHaveProperty(id)
+      expect(hasOwnKey(TEXT_MODELS, id)).toBe(false)
     }
     for (const id of OPENAI_IMAGE_SNAPSHOT_MODELS) {
       expect(OPENAI_IMAGE_MODELS).toContain(id)
-      expect(IMAGE_MODELS).not.toHaveProperty(id)
+      expect(hasOwnKey(IMAGE_MODELS, id)).toBe(false)
     }
     for (const id of OPENAI_TTS_SNAPSHOT_MODELS) {
       expect(OPENAI_TTS_MODELS).toContain(id)
-      expect(TTS_MODELS).not.toHaveProperty(id)
+      expect(hasOwnKey(TTS_MODELS, id)).toBe(false)
     }
     for (const id of OPENAI_TRANSCRIPTION_SNAPSHOT_MODELS) {
       expect(OPENAI_TRANSCRIPTION_MODELS).toContain(id)
-      expect(TRANSCRIPTION_MODELS).not.toHaveProperty(id)
+      expect(hasOwnKey(TRANSCRIPTION_MODELS, id)).toBe(false)
     }

Also applies to: 303-305

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-openai/tests/model-meta.test.ts` around lines 100 -
114, Replace the use of Vitest's toHaveProperty for negative checks with an
explicit own-property check: for each loop that iterates
OPENAI_CHAT_SNAPSHOT_MODELS, OPENAI_IMAGE_SNAPSHOT_MODELS,
OPENAI_TTS_SNAPSHOT_MODELS, and OPENAI_TRANSCRIPTION_SNAPSHOT_MODELS, change the
assertions like expect(TEXT_MODELS).not.toHaveProperty(id) to assert that the
registry object does not have an own key (e.g. use
Object.prototype.hasOwnProperty.call(TEXT_MODELS, id) and
expect(...).toBe(false)); do the same for IMAGE_MODELS, TTS_MODELS and
TRANSCRIPTION_MODELS so the test verifies direct own keys rather than
property-path lookups.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/typescript/ai-openai/tests/model-meta.test.ts`:
- Around line 172-200: Replace structural inheritance checks for the optional
reasoning property with key-presence assertions: for each occurrence of
expectTypeOf<...>().toExtend<OpenAIReasoningOptions>() (e.g., GPT54Options,
O1MiniOptions, O3ProOptions, ComputerUseOptions, GPT53ChatOptions,
GPT41Options), instead assert the extracted key presence using Extract<keyof T,
'reasoning'> and compare with toEqualTypeOf<'reasoning'>() when the type should
expose the reasoning key or toEqualTypeOf<never>() when it should not; update
the tests that currently call toExtend<OpenAIReasoningOptions>() to follow the
pattern used on lines ~147–153 so they check Extract<keyof T, 'reasoning'> for
each corresponding option type.

---

Nitpick comments:
In `@packages/typescript/ai-openai/src/models/text.ts`:
- Around line 39-42: The current types allow features.reasoning and
entry.reasoning to drift; make reasoning support a single discriminated type by
replacing the separate NonReasoningTextProviderFeatureMap and related omissions
with a union that couples the feature flags and entry shape (e.g., a
"noReasoning" variant with TextProviderFeatureMap lacking
reasoning/reasoningConcise and an explicit "reasoning" variant with the
corresponding reasoning/reasoningConcise flags), then update
TextProviderOptionsForEntry to use that discriminated union so the presence of
features.reasoning and features.reasoningConcise is determined by the same
variant that defines entry.reasoning; change references to
TextProviderFeatureMap, NonReasoningTextProviderFeatureMap, and
TextProviderOptionsForEntry accordingly and ensure the union discriminant is
entry.reasoning (or a dedicated literal) so the compiler enforces the invariant
across model configs.

In `@packages/typescript/ai-openai/tests/model-meta.test.ts`:
- Around line 100-114: Replace the use of Vitest's toHaveProperty for negative
checks with an explicit own-property check: for each loop that iterates
OPENAI_CHAT_SNAPSHOT_MODELS, OPENAI_IMAGE_SNAPSHOT_MODELS,
OPENAI_TTS_SNAPSHOT_MODELS, and OPENAI_TRANSCRIPTION_SNAPSHOT_MODELS, change the
assertions like expect(TEXT_MODELS).not.toHaveProperty(id) to assert that the
registry object does not have an own key (e.g. use
Object.prototype.hasOwnProperty.call(TEXT_MODELS, id) and
expect(...).toBe(false)); do the same for IMAGE_MODELS, TTS_MODELS and
TRANSCRIPTION_MODELS so the test verifies direct own keys rather than
property-path lookups.

In `@packages/typescript/ai-openai/tests/transcription-adapter.test.ts`:
- Around line 27-35: Extract the repeated deep-cast and client assignment used
in tests into a small helper function (e.g., createStubbedClient or
stubAdapterClient) and replace the inline blocks that assign (adapter as unknown
as { client: { audio: { transcriptions: { create: unknown } } } }).client = {
audio: { transcriptions: { create, }, }, } with calls to that helper; the helper
should accept the create mock (or an object of mocks) and perform the cast and
assignment so test sites around adapter, create, and client (lines currently
duplicating at the four locations) become single-line helper calls for easier
maintenance.

In `@packages/typescript/ai-openai/tests/tts-adapter.test.ts`:
- Around line 11-18: Extract the repeated adapter client patch into a small
helper function (e.g., mockAdapterClient) that takes the adapter and the create
mock and assigns (adapter as unknown as { client: { audio: { speech: { create:
unknown } } } }).client = { audio: { speech: { create } } }; replace the
duplicated blocks in the tests with calls to this helper so both occurrences
around adapter and audio.speech.create use the same utility.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 5ea4fade-a789-49bd-8a47-a213f000087b

📥 Commits

Reviewing files that changed from the base of the PR and between e25b306 and 6e37670.

📒 Files selected for processing (10)
  • packages/typescript/ai-openai/src/image/image-provider-options.ts
  • packages/typescript/ai-openai/src/models/text.ts
  • packages/typescript/ai-openai/tests/image-adapter.test.ts
  • packages/typescript/ai-openai/tests/model-meta.test.ts
  • packages/typescript/ai-openai/tests/openai-adapter.test.ts
  • packages/typescript/ai-openai/tests/realtime-adapter.test.ts
  • packages/typescript/ai-openai/tests/realtime-token.test.ts
  • packages/typescript/ai-openai/tests/transcription-adapter.test.ts
  • packages/typescript/ai-openai/tests/tts-adapter.test.ts
  • packages/typescript/ai-openai/tests/video-adapter.test.ts
✅ Files skipped from review due to trivial changes (1)
  • packages/typescript/ai-openai/src/image/image-provider-options.ts
🚧 Files skipped from review as they are similar to previous changes (2)
  • packages/typescript/ai-openai/tests/image-adapter.test.ts
  • packages/typescript/ai-openai/tests/video-adapter.test.ts

Comment on lines +172 to +200
expectTypeOf<GPT54Options>().toExtend<OpenAIReasoningOptions>()
expectTypeOf<GPT54Options>().toExtend<OpenAIStructuredOutputOptions>()
expectTypeOf<GPT54Options>().toExtend<OpenAIToolsOptions>()
expectTypeOf<GPT53ChatOptions>().not.toExtend<OpenAIReasoningOptions>()
expectTypeOf<GPT53ChatOptions>().toExtend<OpenAIStructuredOutputOptions>()
expectTypeOf<GPT53ChatOptions>().toExtend<OpenAIToolsOptions>()
expectTypeOf<GPT5ProOptions>().toExtend<OpenAIStructuredOutputOptions>()
expectTypeOf<GPT41Options>().not.toExtend<OpenAIReasoningOptions>()
expectTypeOf<GPT41Options>().toExtend<OpenAIStructuredOutputOptions>()
expectTypeOf<GPT41Options>().toExtend<OpenAIToolsOptions>()
expectTypeOf<GPT4TurboOptions>().not.toExtend<OpenAIStructuredOutputOptions>()
expectTypeOf<GPT4TurboOptions>().toExtend<OpenAIToolsOptions>()
expectTypeOf<GPT35TurboOptions>().not.toExtend<OpenAIStreamingOptions>()
expectTypeOf<GPT35TurboOptions>().not.toExtend<OpenAIToolsOptions>()
expectTypeOf<SearchPreviewOptions>().toExtend<
OpenAIStructuredOutputOptions
>()
expectTypeOf<SearchPreviewOptions>().not.toExtend<OpenAIToolsOptions>()
expectTypeOf<O1MiniOptions>().toExtend<OpenAIReasoningOptions>()
expectTypeOf<O1MiniOptions>().not.toExtend<OpenAIStructuredOutputOptions>()
expectTypeOf<O1MiniOptions>().not.toExtend<OpenAIToolsOptions>()
expectTypeOf<O3ProOptions>().toExtend<OpenAIReasoningOptions>()
expectTypeOf<O3ProOptions>().toExtend<OpenAIStructuredOutputOptions>()
expectTypeOf<O3ProOptions>().toExtend<OpenAIToolsOptions>()
expectTypeOf<O3ProOptions>().not.toExtend<OpenAIStreamingOptions>()
expectTypeOf<ComputerUseOptions>().toExtend<
OpenAIReasoningOptionsWithConcise
>()
expectTypeOf<ComputerUseOptions>().not.toExtend<OpenAIStreamingOptions>()
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

fd -t f "text-provider-options.ts" --path "*ai-openai*"

Repository: TanStack/ai

Length of output: 288


🏁 Script executed:

cat -n packages/typescript/ai-openai/src/text/text-provider-options.ts | head -100

Repository: TanStack/ai

Length of output: 6269


🏁 Script executed:

rg "OpenAIReasoningOptions" packages/typescript/ai-openai/src/text/text-provider-options.ts -A 5 -B 2

Repository: TanStack/ai

Length of output: 1096


🏁 Script executed:

cat -n packages/typescript/ai-openai/tests/model-meta.test.ts | sed -n '147,153p'

Repository: TanStack/ai

Length of output: 531


🏁 Script executed:

cat -n packages/typescript/ai-openai/tests/model-meta.test.ts | sed -n '140,170p'

Repository: TanStack/ai

Length of output: 1928


🏁 Script executed:

cat -n packages/typescript/ai-openai/tests/model-meta.test.ts | sed -n '172,200p'

Repository: TanStack/ai

Length of output: 2119


🏁 Script executed:

rg "export type OpenAIReasoningOptions" packages/typescript/ai-openai/src/text/text-provider-options.ts -A 15

Repository: TanStack/ai

Length of output: 1828


🏁 Script executed:

rg "reasoning\?" packages/typescript/ai-openai/src/text/text-provider-options.ts | head -20

Repository: TanStack/ai

Length of output: 70


🏁 Script executed:

rg "GPT54Options|O1MiniOptions|ComputerUseOptions" packages/typescript/ai-openai/src -B 2 -A 3 | head -50

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

fd -t f "model-meta" --path "*ai-openai*" | head -5

Repository: TanStack/ai

Length of output: 288


🏁 Script executed:

fd -t f model-meta packages/typescript/ai-openai

Repository: TanStack/ai

Length of output: 157


🏁 Script executed:

rg "type GPT54Options|type O1MiniOptions|type ComputerUseOptions" packages/typescript/ai-openai -B 5 -A 20

Repository: TanStack/ai

Length of output: 4386


🏁 Script executed:

rg "OpenAIChatModelProviderOptionsByName" packages/typescript/ai-openai/src -l

Repository: TanStack/ai

Length of output: 196


🏁 Script executed:

cat -n packages/typescript/ai-openai/src/model-meta.ts | head -150

Repository: TanStack/ai

Length of output: 3539


🏁 Script executed:

rg "export type OpenAIChatModelProviderOptionsByName" packages/typescript/ai-openai/src/model-meta.ts -A 30

Repository: TanStack/ai

Length of output: 1215


🏁 Script executed:

rg "type TextProviderOptionsForEntry" packages/typescript/ai-openai/src/models/text.ts -A 20

Repository: TanStack/ai

Length of output: 746


🏁 Script executed:

cat packages/typescript/ai-openai/src/models/text.ts | head -200

Repository: TanStack/ai

Length of output: 5615


🏁 Script executed:

rg "'gpt-5.3-chat-latest'|'o1-mini'|'computer-use-preview'" packages/typescript/ai-openai/src/models/text.ts -A 12

Repository: TanStack/ai

Length of output: 2759


🏁 Script executed:

rg "type OpenAIReasoningOptionsWithConcise|reasoningConcise" packages/typescript/ai-openai/src/models/text.ts -B 5 -A 5

Repository: TanStack/ai

Length of output: 918


🏁 Script executed:

cat -n packages/typescript/ai-openai/src/models/text.ts | sed -n '70,95p'

Repository: TanStack/ai

Length of output: 799


🏁 Script executed:

python3 << 'EOF'
# Let's verify TypeScript's structural assignment behavior with optional properties
import json

test_code = """
// Test 1: Empty object extends type with optional property
type OptionalProp = { optional?: string }
type EmptyObj = {}
type Test1 = EmptyObj extends OptionalProp ? true : false
// Test1 should be: true (empty object structurally extends optional property type)

// Test 2: Object without the key
type NoKey = { other: string }
type Test2 = NoKey extends OptionalProp ? true : false
// Test2 should be: true (object without key still extends optional property type)

// Test 3: Check presence of key
type NoKeyHasOptional = Extract<keyof NoKey, 'optional'>
// NoKeyHasOptional should be: never

type WithKeyHasOptional = Extract<keyof OptionalProp, 'optional'>
// WithKeyHasOptional should be: 'optional'
"""

print("TypeScript Structural Assignment Rules:")
print("========================================")
print("When a type only has optional properties:")
print("- {} extends { optional?: T } → TRUE (empty object satisfies optional-only interface)")
print("- { other: X } extends { optional?: T } → TRUE (object without the optional key still satisfies)")
print()
print("Therefore:")
print("- Models WITHOUT reasoning feature → .toExtend<OpenAIReasoningOptions>() will PASS (incorrect)")
print("- Models WITHOUT reasoning feature → .not.toExtend<OpenAIReasoningOptions>() will FAIL (wrong result)")
print()
print("Using Extract<keyof T, 'reasoning'> is more precise:")
print("- Models WITH reasoning → Extract<..., 'reasoning'> = 'reasoning'")
print("- Models WITHOUT reasoning → Extract<..., 'reasoning'> = never")
EOF

Repository: TanStack/ai

Length of output: 752


🏁 Script executed:

cat -n packages/typescript/ai-openai/tests/model-meta.test.ts | sed -n '155,200p'

Repository: TanStack/ai

Length of output: 3148


Replace toExtend<OpenAIReasoningOptions>() with key-presence assertions using Extract<keyof T, 'reasoning'>.

OpenAIReasoningOptions only adds an optional reasoning property, so structurally any provider-options object type (including those without the reasoning feature) extends it. This makes the positive assertions pass incorrectly and the negative assertions fail when they should pass. Use the pattern from lines 147–153: extract the key and assert toEqualTypeOf<'reasoning'>() or toEqualTypeOf<never>().

Proposed fix
+    type GPT54HasReasoning = Extract<keyof GPT54Options, 'reasoning'>
+    type GPT53ChatHasReasoning = Extract<keyof GPT53ChatOptions, 'reasoning'>
+    type GPT5ProHasReasoning = Extract<keyof GPT5ProOptions, 'reasoning'>
+    type GPT41HasReasoning = Extract<keyof GPT41Options, 'reasoning'>
+    type O1MiniHasReasoning = Extract<keyof O1MiniOptions, 'reasoning'>
+    type O3ProHasReasoning = Extract<keyof O3ProOptions, 'reasoning'>
+    type ComputerUseHasReasoning = Extract<keyof ComputerUseOptions, 'reasoning'>
+
-    expectTypeOf<GPT54Options>().toExtend<OpenAIReasoningOptions>()
+    expectTypeOf<GPT54HasReasoning>().toEqualTypeOf<'reasoning'>()
     expectTypeOf<GPT54Options>().toExtend<OpenAIStructuredOutputOptions>()
     expectTypeOf<GPT54Options>().toExtend<OpenAIToolsOptions>()
-    expectTypeOf<GPT53ChatOptions>().not.toExtend<OpenAIReasoningOptions>()
+    expectTypeOf<GPT53ChatHasReasoning>().toEqualTypeOf<never>()
     expectTypeOf<GPT53ChatOptions>().toExtend<OpenAIStructuredOutputOptions>()
     expectTypeOf<GPT53ChatOptions>().toExtend<OpenAIToolsOptions>()
-    expectTypeOf<GPT5ProOptions>().toExtend<OpenAIStructuredOutputOptions>()
+    expectTypeOf<GPT5ProHasReasoning>().toEqualTypeOf<never>()
+    expectTypeOf<GPT5ProOptions>().toExtend<OpenAIStructuredOutputOptions>()
-    expectTypeOf<GPT41Options>().not.toExtend<OpenAIReasoningOptions>()
+    expectTypeOf<GPT41HasReasoning>().toEqualTypeOf<never>()
     expectTypeOf<GPT41Options>().toExtend<OpenAIStructuredOutputOptions>()
     expectTypeOf<GPT41Options>().toExtend<OpenAIToolsOptions>()
     expectTypeOf<GPT4TurboOptions>().not.toExtend<OpenAIStructuredOutputOptions>()
     expectTypeOf<GPT4TurboOptions>().toExtend<OpenAIToolsOptions>()
     expectTypeOf<GPT35TurboOptions>().not.toExtend<OpenAIStreamingOptions>()
     expectTypeOf<GPT35TurboOptions>().not.toExtend<OpenAIToolsOptions>()
     expectTypeOf<SearchPreviewOptions>().toExtend<
       OpenAIStructuredOutputOptions
     >()
     expectTypeOf<SearchPreviewOptions>().not.toExtend<OpenAIToolsOptions>()
-    expectTypeOf<O1MiniOptions>().toExtend<OpenAIReasoningOptions>()
+    expectTypeOf<O1MiniHasReasoning>().toEqualTypeOf<'reasoning'>()
     expectTypeOf<O1MiniOptions>().not.toExtend<OpenAIStructuredOutputOptions>()
     expectTypeOf<O1MiniOptions>().not.toExtend<OpenAIToolsOptions>()
-    expectTypeOf<O3ProOptions>().toExtend<OpenAIReasoningOptions>()
+    expectTypeOf<O3ProHasReasoning>().toEqualTypeOf<'reasoning'>()
     expectTypeOf<O3ProOptions>().toExtend<OpenAIStructuredOutputOptions>()
     expectTypeOf<O3ProOptions>().toExtend<OpenAIToolsOptions>()
     expectTypeOf<O3ProOptions>().not.toExtend<OpenAIStreamingOptions>()
-    expectTypeOf<ComputerUseOptions>().toExtend<
-      OpenAIReasoningOptionsWithConcise
-    >()
+    expectTypeOf<ComputerUseHasReasoning>().toEqualTypeOf<'reasoning'>()
     expectTypeOf<ComputerUseOptions>().not.toExtend<OpenAIStreamingOptions>()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-openai/tests/model-meta.test.ts` around lines 172 -
200, Replace structural inheritance checks for the optional reasoning property
with key-presence assertions: for each occurrence of
expectTypeOf<...>().toExtend<OpenAIReasoningOptions>() (e.g., GPT54Options,
O1MiniOptions, O3ProOptions, ComputerUseOptions, GPT53ChatOptions,
GPT41Options), instead assert the extracted key presence using Extract<keyof T,
'reasoning'> and compare with toEqualTypeOf<'reasoning'>() when the type should
expose the reasoning key or toEqualTypeOf<never>() when it should not; update
the tests that currently call toExtend<OpenAIReasoningOptions>() to follow the
pattern used on lines ~147–153 so they check Extract<keyof T, 'reasoning'> for
each corresponding option type.

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

🧹 Nitpick comments (1)
packages/typescript/ai-openai/tests/model-meta.test.ts (1)

71-78: Make these export checks independent from src/models/shared.ts.

packages/typescript/ai-openai/src/model-meta.ts already defines these arrays by calling supportedIds(...), idsByStatus(...), and snapshotIds(...) directly. Reusing the same helpers here means a bug in packages/typescript/ai-openai/src/models/shared.ts will update the production value and the expected value together, so these assertions won't catch it. Please derive the expectations directly from the raw registries here, or move the helper-specific behavior into dedicated tests for packages/typescript/ai-openai/src/models/shared.ts.

Also applies to: 80-92, 94-100, 307-308

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-openai/tests/model-meta.test.ts` around lines 71 - 78,
The test currently derives expected arrays by calling the same helpers
(supportedIds, idsByStatus, snapshotIds) used to build the exported constants
(OPENAI_CHAT_MODELS, OPENAI_IMAGE_MODELS, etc.), so a bug in
src/models/shared.ts would mask failures; change the assertions in
model-meta.test.ts to compute expectations directly from the raw registries
(e.g., derive the expected chat ids by iterating/filtering TEXT_MODELS to match
the intended criteria, likewise derive image/video/tts/transcription/realtime
expectations from
IMAGE_MODELS/VIDEO_MODELS/TTS_MODELS/TRANSCRIPTION_MODELS/REALTIME_MODELS)
instead of calling supportedIds/idsByStatus/snapshotIds, and move any tests that
validate helper behavior into dedicated tests for the helper functions
(supportedIds, idsByStatus, snapshotIds) so export checks for OPENAI_* constants
validate production values against raw registries only.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/typescript/ai-openai/tests/model-meta.test.ts`:
- Around line 309-311: The test uses toHaveProperty(id) which treats id as a
property path; replace that assertion with an exact-key check using hasOwn() to
match the other snapshot loops: iterate OPENAI_REALTIME_SNAPSHOT_MODELS and
assert OPENAI_REALTIME_MODELS contains id and that REALTIME_MODELS.hasOwn(id) is
false (or expect(hasOwn(REALTIME_MODELS, id)).toBe(false)) so the test checks
for a literal key rather than a property path; update the assertion referencing
OPENAI_REALTIME_SNAPSHOT_MODELS, OPENAI_REALTIME_MODELS, and REALTIME_MODELS
accordingly.

---

Nitpick comments:
In `@packages/typescript/ai-openai/tests/model-meta.test.ts`:
- Around line 71-78: The test currently derives expected arrays by calling the
same helpers (supportedIds, idsByStatus, snapshotIds) used to build the exported
constants (OPENAI_CHAT_MODELS, OPENAI_IMAGE_MODELS, etc.), so a bug in
src/models/shared.ts would mask failures; change the assertions in
model-meta.test.ts to compute expectations directly from the raw registries
(e.g., derive the expected chat ids by iterating/filtering TEXT_MODELS to match
the intended criteria, likewise derive image/video/tts/transcription/realtime
expectations from
IMAGE_MODELS/VIDEO_MODELS/TTS_MODELS/TRANSCRIPTION_MODELS/REALTIME_MODELS)
instead of calling supportedIds/idsByStatus/snapshotIds, and move any tests that
validate helper behavior into dedicated tests for the helper functions
(supportedIds, idsByStatus, snapshotIds) so export checks for OPENAI_* constants
validate production values against raw registries only.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 84a62b61-3de3-48a9-a8b6-89a080895684

📥 Commits

Reviewing files that changed from the base of the PR and between 3233749 and 4982a1d.

📒 Files selected for processing (4)
  • packages/typescript/ai-openai/src/models/text.ts
  • packages/typescript/ai-openai/tests/model-meta.test.ts
  • packages/typescript/ai-openai/tests/transcription-adapter.test.ts
  • packages/typescript/ai-openai/tests/tts-adapter.test.ts
✅ Files skipped from review due to trivial changes (2)
  • packages/typescript/ai-openai/tests/tts-adapter.test.ts
  • packages/typescript/ai-openai/tests/transcription-adapter.test.ts

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant