Conversation
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (10)
✅ Files skipped from review due to trivial changes (9)
🚧 Files skipped from review as they are similar to previous changes (1)
📝 Walkthrough

Adds a new

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant Client
  participant NewApiProvider
  participant ConfigPresenter
  participant OpenAIDel as OpenAI Delegate
  participant AnthropicDel as Anthropic Delegate
  participant GeminiDel as Gemini Delegate
  participant ModelCapabilities
  Client->>NewApiProvider: completions(messages, modelId)
  NewApiProvider->>ConfigPresenter: getModelConfig(modelId)
  ConfigPresenter-->>NewApiProvider: config { endpointType? }
  NewApiProvider->>NewApiProvider: resolveEndpointType(modelId)
  alt endpointType == 'openai' or default
    NewApiProvider->>OpenAIDel: completions(...)
    OpenAIDel->>ModelCapabilities: getThinkingBudgetRange(openai, modelId)
    ModelCapabilities-->>OpenAIDel: range
    OpenAIDel-->>NewApiProvider: LLMResponse
  else endpointType == 'anthropic'
    NewApiProvider->>AnthropicDel: completions(...)
    AnthropicDel->>ModelCapabilities: supportsReasoningCapability(anthropic, modelId)
    ModelCapabilities-->>AnthropicDel: bool
    AnthropicDel-->>NewApiProvider: LLMResponse
  else endpointType == 'gemini'
    NewApiProvider->>GeminiDel: completions(...)
    GeminiDel->>ModelCapabilities: getReasoningPortrait(gemini, modelId)
    ModelCapabilities-->>GeminiDel: portrait
    GeminiDel-->>NewApiProvider: LLMResponse
  end
  NewApiProvider-->>Client: LLMResponse
```
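The branching in the diagram boils down to a dispatch on the resolved endpoint type. A minimal sketch of that dispatch follows; all names and signatures here are illustrative, not the project's real API, and `'openai-response'`/`'image-generation'` are folded into the OpenAI delegate purely to keep the sketch small.

```typescript
// Sketch of the endpoint-type dispatch shown in the diagram.
type NewApiEndpointType = 'openai' | 'openai-response' | 'anthropic' | 'gemini' | 'image-generation'
type DelegateKey = 'openai' | 'anthropic' | 'gemini'

// Map a resolved endpoint type to the delegate that should handle the request.
function pickDelegate(endpointType: NewApiEndpointType): DelegateKey {
  switch (endpointType) {
    case 'anthropic':
      return 'anthropic'
    case 'gemini':
      return 'gemini'
    default:
      // 'openai' and anything unrecognized fall back to the OpenAI delegate.
      return 'openai'
  }
}
```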
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ Passed checks (3 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 15
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/main/presenter/configPresenter/index.ts`:
- Around line 539-579: The code currently hard-codes the literal string
'new-api' when deciding NewAPI behavior in resolveNewApiCapabilityEndpointType
and resolveCapabilityProviderId; instead look up the provider's apiType for the
given providerId and use that apiType when calling
getModelConfig/getProviderModels/getCustomModels and when checking whether to
run NewAPI logic. Concretely: in resolveCapabilityProviderId, query the provider
object (e.g., this.getProvider(providerId) or equivalent) to get
providerApiType; return providerId early if providerApiType !== 'new-api';
otherwise call resolveNewApiCapabilityEndpointType with the resolved
providerApiType (or modify resolveNewApiCapabilityEndpointType to fetch
providerApiType internally) and replace all hard-coded 'new-api' bucket
references in resolveNewApiCapabilityEndpointType with the providerApiType
variable so cloned/custom providers with apiType 'new-api' are handled correctly
while preserving fallback behavior and the call to
resolveNewApiCapabilityProviderId.
In `@src/main/presenter/llmProviderPresenter/providers/newApiProvider.ts`:
- Around line 617-625: In the 'gemini' streaming branch, normalize the messages
before delegating to Gemini by converting the raw messages array with
toGeminiMessages() and passing that result into geminiDelegate.coreStream;
update the case 'gemini' handling (where coreStream(...) is called) to call
toGeminiMessages(messages) (same normalization used by completions() and
summaryTitles()) so unsupported roles/content parts are filtered out on the
streaming path as well.
- Around line 249-260: The inferModelType function currently promotes any model
that lists 'image-generation' in supported to ModelType.ImageGeneration; change
this so a model is classified as ImageGeneration only when its rawModel.type (or
rawModel.id) explicitly indicates an image-only model or when supported includes
'image-generation' and does not include chat-like endpoints (e.g., 'openai');
specifically, update inferModelType to check normalizedRawType for
'image'/'imagegeneration' OR (supported includes 'image-generation' AND
supported does NOT include 'openai' or other non-image endpoint types), and
consider also checking rawModel.id for image-specific identifiers before
returning ModelType.ImageGeneration to avoid promoting dual-mode models to
image-only.
In `@src/renderer/settings/components/ProviderApiConfig.vue`:
- Line 174: The anchor that opens external links in ProviderApiConfig.vue (the
<a> with :href="providerApiKeyUrl" and target="_blank" displaying {{
provider.name }}) should include rel="noopener noreferrer" to prevent
reverse-tabnabbing; update that <a> element to add the rel attribute while
keeping the existing :href and target bindings.
In `@src/renderer/src/components/settings/ModelConfigDialog.vue`:
- Around line 791-800: The computed availableEndpointTypes currently returns the
full NEW_API_ENDPOINT_TYPES when supportedEndpointTypes is absent, which exposes
endpoints for persisted models that only have a single persisted default; change
availableEndpointTypes to: if providerModelMeta.value?.supportedEndpointTypes is
a non-empty array use the filtered isNewApiEndpointType list, else if
providerModelMeta.value?.endpointType exists return
[providerModelMeta.value.endpointType] (validated with isNewApiEndpointType),
and only fall back to [...NEW_API_ENDPOINT_TYPES] when neither
supportedEndpointTypes nor providerModelMeta.endpointType exist (i.e., truly
new/custom models). This touches availableEndpointTypes, providerModelMeta,
supportedEndpointTypes, endpointType, isNewApiEndpointType, and
NEW_API_ENDPOINT_TYPES.
In `@src/renderer/src/i18n/da-DK/settings.json`:
- Around line 402-413: Translate the English text for the endpointType object
into Danish: update the values for the keys endpointType.label,
endpointType.description, endpointType.placeholder, endpointType.required and
each option under endpointType.options (openai, openai-response, anthropic,
gemini, image-generation) so the UI shows Danish strings instead of English;
keep the key names intact and only replace the English string values with
appropriate Danish translations.
In `@src/renderer/src/i18n/fa-IR/settings.json`:
- Around line 456-467: The Persian locale file contains English text for the new
"endpointType" block; translate every string under the endpointType object
(keys: label, description, placeholder, required and each options value:
"openai", "openai-response", "anthropic", "gemini", "image-generation") into
Persian so the fa-IR settings.json is fully localized; update those values in
the endpointType object (e.g., endpointType.label, endpointType.description,
endpointType.placeholder, endpointType.required, and endpointType.options.*)
with the proper Persian translations.
In `@src/renderer/src/i18n/fr-FR/settings.json`:
- Around line 456-467: The "endpointType" locale block is still in English;
translate the values for label, description, placeholder, required and each
option under options ("openai", "openai-response", "anthropic", "gemini",
"image-generation") into French so the fr-FR settings.json is fully localized;
update the strings for the "endpointType" object (label, description,
placeholder, required, and options keys) with appropriate French text while
keeping keys unchanged.
In `@src/renderer/src/i18n/he-IL/settings.json`:
- Around line 456-467: The strings under the endpointType object (keys: label,
description, placeholder, required, and options including openai,
openai-response, anthropic, gemini, image-generation) are still in English in
the he-IL file; replace each English string with the correct Hebrew translations
to avoid mixed-language UI for Hebrew users—update "endpointType.label",
"endpointType.description", "endpointType.placeholder", "endpointType.required"
and each "endpointType.options.*" value with their Hebrew equivalents while
preserving the key names and JSON structure.
In `@src/renderer/src/i18n/ja-JP/settings.json`:
- Around line 456-467: Translate the English strings under the endpointType
object into Japanese: update endpointType.label, endpointType.description,
endpointType.placeholder, endpointType.required and each endpointType.options
key (openai, openai-response, anthropic, gemini, image-generation) with
appropriate Japanese text; keep the same JSON keys and punctuation, preserve
Unicode/encoding, and ensure the resulting values read naturally in Japanese for
the settings UI.
In `@src/renderer/src/i18n/ko-KR/settings.json`:
- Around line 456-467: The endpointType localization block is still in English;
update the "endpointType" object keys (label, description, placeholder,
required) and each options entry ("openai", "openai-response", "anthropic",
"gemini", "image-generation") with Korean translations so the ko-KR
settings.json is fully localized; keep keys unchanged but replace the English
strings with appropriate Korean equivalents for label, description, placeholder,
required, and each option value.
In `@src/renderer/src/i18n/pt-BR/settings.json`:
- Around line 456-467: The endpointType translation entries are still in
English; update the "endpointType" object (keys: "label", "description",
"placeholder", "required", and each "options" value: "openai",
"openai-response", "anthropic", "gemini", "image-generation") to Portuguese so
the pt-BR locale is consistent; replace the English strings with appropriate
Brazilian Portuguese equivalents for the label, description, placeholder,
required message, and each option display name while preserving the JSON keys
and structure.
In `@src/renderer/src/i18n/ru-RU/settings.json`:
- Around line 456-467: The ru-RU locale's endpointType block is still English;
update the keys under "endpointType" (label, description, placeholder, required)
and each "options" entry ("openai", "openai-response", "anthropic", "gemini",
"image-generation") with Russian translations so the settings UI is fully
localized; locate the "endpointType" object in the ru-RU settings.json and
replace the English strings with appropriate Russian text for the label,
description, placeholder, required message and all option names.
In `@src/renderer/src/i18n/zh-HK/settings.json`:
- Around line 456-467: The endpointType translation block is still in English;
update the object keys under endpointType (label, description, placeholder,
required, and each options key: openai, openai-response, anthropic, gemini,
image-generation) to Traditional Chinese (zh-HK) so the UI is fully
localized—replace "Endpoint Type", "Select which upstream protocol New API
should use for this model.", "Select endpoint type", "Endpoint type is
required", and the option values "OpenAI Chat", "OpenAI Responses", "Anthropic
Messages", "Gemini Native", "Image Generation" with appropriate zh-HK
translations while preserving the same JSON keys and structure.
In `@src/renderer/src/i18n/zh-TW/settings.json`:
- Around line 456-467: The endpointType block contains English strings; update
the Traditional Chinese (zh-TW) translations for "endpointType.label",
"endpointType.description", "endpointType.placeholder", "endpointType.required"
and each option key ("openai", "openai-response", "anthropic", "gemini",
"image-generation") so the UI is fully localized—e.g., replace label with
"端點類型", description with "為此模型選擇上游通訊協定(New API)", placeholder with "選擇端點類型",
required with "需選擇端點類型", and option values with appropriate zh-TW equivalents
such as "OpenAI 聊天", "OpenAI 回應", "Anthropic 訊息", "Gemini 原生", "影像生成". Ensure
you update the strings under the endpointType object only.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 3e8d8e59-9b45-4f0e-ab84-d61f59c007db
⛔ Files ignored due to path filters (1)
src/renderer/src/assets/llm-icons/newapi.svg is excluded by !**/*.svg
📒 Files selected for processing (35)
src/main/presenter/configPresenter/index.ts
src/main/presenter/configPresenter/modelConfig.ts
src/main/presenter/configPresenter/providerModelHelper.ts
src/main/presenter/configPresenter/providers.ts
src/main/presenter/llmProviderPresenter/baseProvider.ts
src/main/presenter/llmProviderPresenter/managers/modelManager.ts
src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
src/main/presenter/llmProviderPresenter/providers/newApiProvider.ts
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
src/renderer/settings/components/ProviderApiConfig.vue
src/renderer/src/components/chat/ChatStatusBar.vue
src/renderer/src/components/icons/ModelIcon.vue
src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/src/i18n/da-DK/settings.json
src/renderer/src/i18n/en-US/settings.json
src/renderer/src/i18n/fa-IR/settings.json
src/renderer/src/i18n/fr-FR/settings.json
src/renderer/src/i18n/he-IL/settings.json
src/renderer/src/i18n/ja-JP/settings.json
src/renderer/src/i18n/ko-KR/settings.json
src/renderer/src/i18n/pt-BR/settings.json
src/renderer/src/i18n/ru-RU/settings.json
src/renderer/src/i18n/zh-CN/settings.json
src/renderer/src/i18n/zh-HK/settings.json
src/renderer/src/i18n/zh-TW/settings.json
src/renderer/src/pages/NewThreadPage.vue
src/renderer/src/stores/modelStore.ts
src/shared/model.ts
src/shared/types/presenters/legacy.presenters.d.ts
src/shared/types/presenters/llmprovider.presenter.d.ts
test/main/presenter/llmProviderPresenter/newApiProvider.test.ts
test/renderer/components/ChatStatusBar.test.ts
test/renderer/components/ModelConfigDialog.test.ts
```ts
private resolveNewApiCapabilityEndpointType(modelId: string): NewApiEndpointType {
  const modelConfig = this.getModelConfig(modelId, 'new-api')
  if (isNewApiEndpointType(modelConfig.endpointType)) {
    return modelConfig.endpointType
  }

  const storedModel =
    this.getProviderModels('new-api').find((model) => model.id === modelId) ??
    this.getCustomModels('new-api').find((model) => model.id === modelId)

  if (storedModel) {
    if (isNewApiEndpointType(storedModel.endpointType)) {
      return storedModel.endpointType
    }

    const supportedEndpointTypes =
      storedModel.supportedEndpointTypes?.filter(isNewApiEndpointType) ?? []
    if (
      storedModel.type === ModelType.ImageGeneration &&
      supportedEndpointTypes.includes('image-generation')
    ) {
      return 'image-generation'
    }
    if (supportedEndpointTypes.length > 0) {
      return supportedEndpointTypes[0]
    }
    if (storedModel.type === ModelType.ImageGeneration) {
      return 'image-generation'
    }
  }

  return 'openai'
}

private resolveCapabilityProviderId(providerId: string, modelId: string): string {
  if (providerId.trim().toLowerCase() !== 'new-api') {
    return providerId
  }

  return resolveNewApiCapabilityProviderId(this.resolveNewApiCapabilityEndpointType(modelId))
}
```
Avoid hard-coding 'new-api' in capability resolution.
These helpers only activate for a literal provider id of 'new-api' and they also read config/model metadata from the 'new-api' bucket. The renderer already treats any provider whose apiType is 'new-api' as a NewAPI provider, so cloned/custom NewAPI providers will fall back to the raw custom id here and resolve capabilities from the wrong config store. Reasoning/verbosity support will be wrong for those providers.
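One shape the suggested fix could take is gating on the provider's `apiType` instead of the literal id. This is a hedged sketch under stated assumptions: `ProviderEntry`, the `getProvider` lookup, and the `resolveForBucket` callback (which folds the endpoint-type-to-provider mapping into one step) are all simplified stand-ins for the presenter's real API.

```typescript
// Hypothetical sketch: resolve capabilities by apiType, not by a literal 'new-api' id.
interface ProviderEntry {
  id: string
  apiType: string
}

function resolveCapabilityProviderId(
  providerId: string,
  modelId: string,
  getProvider: (id: string) => ProviderEntry | undefined,
  resolveForBucket: (modelId: string, bucket: string) => string
): string {
  const providerApiType = getProvider(providerId)?.apiType
  // Only NewAPI-typed providers (including clones/custom copies) get endpoint resolution.
  if (providerApiType !== 'new-api') {
    return providerId
  }
  // Read model metadata from the provider's own bucket, not the literal 'new-api' one.
  return resolveForBucket(modelId, providerId)
}
```

With this shape, a cloned provider whose `apiType` is `'new-api'` resolves capabilities from its own config bucket rather than falling back to the raw custom id.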
```ts
private inferModelType(rawModel: NewApiModelRecord, supported: NewApiEndpointType[]) {
  const normalizedRawType =
    typeof rawModel.type === 'string' ? rawModel.type.trim().toLowerCase() : ''
  const normalizedModelId = typeof rawModel.id === 'string' ? rawModel.id.toLowerCase() : ''

  if (
    normalizedRawType === 'imagegeneration' ||
    normalizedRawType === 'image-generation' ||
    normalizedRawType === 'image' ||
    supported.includes('image-generation')
  ) {
    return ModelType.ImageGeneration
```
Don’t treat mixed chat/image endpoint support as an image-only model.
A model can support both 'openai' and 'image-generation' without being an image-generation model by default. Promoting every model that advertises 'image-generation' to ModelType.ImageGeneration will make dual-mode chat models default to the image route, which breaks normal chat routing and downstream filtering.
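The suggested classification rule can be sketched as a small predicate: a model is image-only when its raw type explicitly says so, or when `'image-generation'` is its only kind of endpoint. This is an assumption-laden sketch, not the project's implementation; the endpoint union and the "chat-like" set are simplified.

```typescript
// Sketch of the suggested rule: do not promote dual-mode chat/image models to image-only.
type EndpointType = 'openai' | 'openai-response' | 'anthropic' | 'gemini' | 'image-generation'

function isImageOnlyModel(rawType: string | undefined, supported: EndpointType[]): boolean {
  const normalized = (rawType ?? '').trim().toLowerCase()
  // An explicit image type always wins.
  if (normalized === 'image' || normalized === 'imagegeneration' || normalized === 'image-generation') {
    return true
  }
  // Otherwise, image-generation support counts only when no chat-like endpoint is advertised.
  const chatLike: EndpointType[] = ['openai', 'openai-response', 'anthropic', 'gemini']
  return (
    supported.includes('image-generation') &&
    !supported.some((endpoint) => chatLike.includes(endpoint))
  )
}
```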
```ts
case 'gemini':
  yield* this.geminiDelegate.coreStream(
    messages,
    modelId,
    modelConfig,
    temperature,
    maxTokens,
    tools
  )
```
Normalize Gemini messages on the streaming path too.
The non-streaming Gemini branches already call toGeminiMessages(), but coreStream() forwards the raw messages array. That makes streaming behavior diverge from completions()/summaryTitles() and can pass unsupported roles or content parts into GeminiProvider.
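The fix amounts to running the same normalization before delegating on the streaming path. The sketch below is illustrative only: the message shape, the allowed-role set, and the delegate signature are assumptions, not the real `toGeminiMessages()` or `GeminiProvider` API.

```typescript
// Illustrative: filter unsupported roles before streaming, mirroring completions().
interface ChatMessage {
  role: string
  content: string
}

// Stand-in for the project's toGeminiMessages(); the allowed-role set is an assumption.
function toGeminiMessagesSketch(messages: ChatMessage[]): ChatMessage[] {
  const allowed = new Set(['user', 'assistant', 'system'])
  return messages.filter((message) => allowed.has(message.role))
}

async function* coreStreamSketch(
  messages: ChatMessage[],
  delegateStream: (messages: ChatMessage[]) => AsyncIterable<string>
): AsyncIterable<string> {
  // Normalize before delegating, just like the non-streaming branches.
  yield* delegateStream(toGeminiMessagesSketch(messages))
}
```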
```diff
-<a :href="providerWebsites?.apiKey" target="_blank" class="text-primary">{{
-  provider.name
-}}</a>
+<a :href="providerApiKeyUrl" target="_blank" class="text-primary">{{ provider.name }}</a>
```
Harden external link opened with target="_blank"
Add rel="noopener noreferrer" to prevent reverse-tabnabbing when opening the API key page.
🔒 Suggested fix

```diff
-<a :href="providerApiKeyUrl" target="_blank" class="text-primary">{{ provider.name }}</a>
+<a :href="providerApiKeyUrl" target="_blank" rel="noopener noreferrer" class="text-primary">
+  {{ provider.name }}
+</a>
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```vue
<a :href="providerApiKeyUrl" target="_blank" rel="noopener noreferrer" class="text-primary">
  {{ provider.name }}
</a>
```
```ts
const availableEndpointTypes = computed<NewApiEndpointType[]>(() => {
  const supportedEndpointTypes = providerModelMeta.value?.supportedEndpointTypes
  if (Array.isArray(supportedEndpointTypes) && supportedEndpointTypes.length > 0) {
    const normalizedEndpointTypes = supportedEndpointTypes.filter(isNewApiEndpointType)
    if (normalizedEndpointTypes.length > 0) {
      return normalizedEndpointTypes
    }
  }

  return [...NEW_API_ENDPOINT_TYPES]
```
Don’t expose every endpoint when the model only has a persisted default.
If supportedEndpointTypes is absent, this offers the full NEW_API_ENDPOINT_TYPES list even for existing provider models. Older/newly-fetched entries can still carry a single endpointType, so the dialog will let users save delegates the model never advertised and NewApiProvider may route those requests to an incompatible backend. Fall back to [providerModelMeta.endpointType] when that exists, and reserve the full list for brand-new custom models.
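The suggested fallback order can be sketched as a plain function: advertised list first, then the single persisted default, and the full list only for truly new models. The constant values and the `meta` shape below mirror the review text but are assumptions about the component's real types.

```typescript
// Sketch of the suggested fallback order for availableEndpointTypes.
const NEW_API_ENDPOINT_TYPES = [
  'openai', 'openai-response', 'anthropic', 'gemini', 'image-generation'
] as const
type NewApiEndpointType = (typeof NEW_API_ENDPOINT_TYPES)[number]

function isNewApiEndpointType(value: unknown): value is NewApiEndpointType {
  return typeof value === 'string' && (NEW_API_ENDPOINT_TYPES as readonly string[]).includes(value)
}

function availableEndpointTypes(meta?: {
  supportedEndpointTypes?: unknown[]
  endpointType?: unknown
}): NewApiEndpointType[] {
  // 1) A non-empty advertised list wins.
  const supported = (meta?.supportedEndpointTypes ?? []).filter(isNewApiEndpointType)
  if (supported.length > 0) return supported
  // 2) Fall back to the single persisted default when present.
  const persisted = meta?.endpointType
  if (isNewApiEndpointType(persisted)) return [persisted]
  // 3) Only brand-new/custom models see every endpoint type.
  return [...NEW_API_ENDPOINT_TYPES]
}
```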
```json
"endpointType": {
  "label": "Endpoint Type",
  "description": "Select which upstream protocol New API should use for this model.",
  "placeholder": "Select endpoint type",
  "required": "Endpoint type is required",
  "options": {
    "openai": "OpenAI Chat",
    "openai-response": "OpenAI Responses",
    "anthropic": "Anthropic Messages",
    "gemini": "Gemini Native",
    "image-generation": "Image Generation"
  }
```
Korean locale block is not localized yet.
The new endpointType labels/descriptions are English, so users on Korean locale will see mixed UI language.
💡 Suggested localized replacement

```diff
 "endpointType": {
-  "label": "Endpoint Type",
-  "description": "Select which upstream protocol New API should use for this model.",
-  "placeholder": "Select endpoint type",
-  "required": "Endpoint type is required",
+  "label": "엔드포인트 유형",
+  "description": "이 모델에 대해 New API가 사용할 업스트림 프로토콜을 선택하세요.",
+  "placeholder": "엔드포인트 유형 선택",
+  "required": "엔드포인트 유형은 필수입니다",
   "options": {
-    "openai": "OpenAI Chat",
+    "openai": "OpenAI 채팅",
     "openai-response": "OpenAI Responses",
     "anthropic": "Anthropic Messages",
-    "gemini": "Gemini Native",
-    "image-generation": "Image Generation"
+    "gemini": "Gemini 네이티브",
+    "image-generation": "이미지 생성"
   }
 }
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```json
"endpointType": {
  "label": "엔드포인트 유형",
  "description": "이 모델에 대해 New API가 사용할 업스트림 프로토콜을 선택하세요.",
  "placeholder": "엔드포인트 유형 선택",
  "required": "엔드포인트 유형은 필수입니다",
  "options": {
    "openai": "OpenAI 채팅",
    "openai-response": "OpenAI Responses",
    "anthropic": "Anthropic Messages",
    "gemini": "Gemini 네이티브",
    "image-generation": "이미지 생성"
  }
}
```
```json
"endpointType": {
  "label": "Endpoint Type",
  "description": "Select which upstream protocol New API should use for this model.",
  "placeholder": "Select endpoint type",
  "required": "Endpoint type is required",
  "options": {
    "openai": "OpenAI Chat",
    "openai-response": "OpenAI Responses",
    "anthropic": "Anthropic Messages",
    "gemini": "Gemini Native",
    "image-generation": "Image Generation"
  }
```
Translate endpointType copy to pt-BR to avoid mixed-language UI.
Line 457 through Line 466 are English-only in the Portuguese locale file.
```json
"endpointType": {
  "label": "Endpoint Type",
  "description": "Select which upstream protocol New API should use for this model.",
  "placeholder": "Select endpoint type",
  "required": "Endpoint type is required",
  "options": {
    "openai": "OpenAI Chat",
    "openai-response": "OpenAI Responses",
    "anthropic": "Anthropic Messages",
    "gemini": "Gemini Native",
    "image-generation": "Image Generation"
  }
```
Please translate the new endpointType block to zh-HK.
Line 457 through Line 466 are currently English, which will create mixed-language UI in this locale.
feat(provider): add NewAPI provider
Summary by CodeRabbit
New Features
Documentation
Tests