feat: Add thinking and reasoning_effort parameter support for GitHub Copilot provider #13691
base: main
Conversation
…Copilot provider

- Add github_copilot case to get_supported_openai_params function
- Implement get_supported_openai_params method in GithubCopilotConfig
- Dynamically add thinking and reasoning_effort params for Anthropic models
- Add comprehensive tests for parameter support validation
- Ensure case-insensitive model detection for parameter inclusion

Fixes UnsupportedParamsError when using advanced reasoning parameters with Anthropic models through the GitHub Copilot proxy.
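The dynamic parameter logic described in the commit can be sketched roughly as follows. This is a minimal, self-contained illustration, not the actual LiteLLM source: the helper name `_supports_extended_thinking`, the base parameter list, and the model-name tags are all assumptions standing in for the real model-registry lookup.

```python
# Hypothetical sketch of GithubCopilotConfig.get_supported_openai_params.
# Names and the base parameter list are illustrative assumptions.

def _supports_extended_thinking(model: str) -> bool:
    # Stand-in for the model-registry capability check (supports_reasoning()):
    # only the Claude 4 family and Claude 3.7 support extended thinking.
    m = model.lower()
    return any(tag in m for tag in ("claude-sonnet-4", "claude-opus-4", "claude-3-7"))

def get_supported_openai_params(model: str) -> list:
    # Base OpenAI-compatible params (illustrative subset).
    base_params = ["temperature", "max_tokens", "stream", "tools"]
    # Case-insensitive model detection for parameter inclusion.
    if "claude" in model.lower() and _supports_extended_thinking(model):
        base_params = base_params + ["thinking", "reasoning_effort"]
    return base_params
```

With this shape, a model like `Claude-3-7-Sonnet-20250219` matches regardless of casing, while non-Anthropic models keep only the base parameter list.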
… provider

- Add dynamic parameter support for anthropic models through GitHub Copilot
- Include thinking parameter for anthropic model compatibility
- Support reasoning_effort parameter for both anthropic and reasoning models
- Update test coverage for parameter validation logic
- Ensure proper parameter filtering based on model type
```python
base_params = super().get_supported_openai_params(model)

# Add Claude-specific parameters for Anthropic models
if "claude" in model.lower():
```
Isn't it just the Claude 4 family, and not all Claude models?
Fixed - this now only applies to models with extended thinking support (the 4 family and 3.7), not all Claude models.
```python
)
elif custom_llm_provider == "github_copilot":
```
This is not needed. `provider_config_manager` on line 44 should already handle this:

```python
if provider_config and request_type == "chat_completion":
    return provider_config.get_supported_openai_params(model=model)
```

Please remove this.
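The dispatch pattern the reviewer is pointing at can be sketched like this. All names here are illustrative stand-ins for `provider_config_manager` and the `LlmProviders` mapping, not the actual LiteLLM source.

```python
# Minimal sketch: the provider name is mapped to a config object, and the
# generic path calls that object's get_supported_openai_params, so an
# explicit elif branch for github_copilot is redundant.

class GithubCopilotConfig:
    def get_supported_openai_params(self, model: str) -> list:
        params = ["temperature", "max_tokens"]
        if "claude" in model.lower():
            params += ["thinking", "reasoning_effort"]
        return params

# Stand-in for the provider_config_manager / LlmProviders mapping.
PROVIDER_CONFIGS = {"github_copilot": GithubCopilotConfig()}

def get_supported_openai_params(model: str, custom_llm_provider: str,
                                request_type: str = "chat_completion"):
    provider_config = PROVIDER_CONFIGS.get(custom_llm_provider)
    # This generic path already covers github_copilot; no elif needed.
    if provider_config and request_type == "chat_completion":
        return provider_config.get_supported_openai_params(model=model)
    return []
```

Because the mapping already routes `github_copilot` to its config object, adding a provider-specific branch only duplicates this path.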
Fixed - removed the redundant check since provider_config_manager already handles this.
…ended thinking support

Only models in the 4 family and 3-7 family support extended thinking features. Previously, all models would incorrectly receive these parameters. Now uses supports_reasoning() to check the model registry for actual capability.
…rams

The provider_config_manager already handles the github_copilot provider through the LlmProviders.GITHUB_COPILOT mapping, making the explicit check unnecessary.
… support

- Fix supports_reasoning() call to use lowercase model names for proper lookup
- Remove custom_llm_provider parameter as model registry entries are provider-agnostic
- Update tests to use full model names with date stamps (required for supports_reasoning)
- Add test coverage for models without extended thinking support
Fixes UnsupportedParamsError when using advanced reasoning parameters with Anthropic models through GitHub Copilot proxy.
Title
Add thinking and reasoning_effort parameter support for GitHub Copilot provider
Relevant issues
Pre-Submission checklist
Please complete all items before asking a LiteLLM maintainer to review your PR:

- Added testing in the `tests/litellm/` directory (adding at least 1 test is a hard requirement - see details)
- `make test-unit` passes
Type
🆕 New Feature
Changes
Added support for the thinking and reasoning_effort parameters in the GitHub Copilot proxy for Anthropic models.