feat: Add thinking and reasoning_effort parameter support for GitHub Copilot provider #13691

Open · wants to merge 8 commits into main

Conversation

timelfrink

  • Add github_copilot case to get_supported_openai_params function
  • Implement get_supported_openai_params method in GithubCopilotConfig
  • Dynamically add thinking and reasoning_effort params for Anthropic models
  • Add comprehensive tests for parameter support validation
  • Ensure case-insensitive model detection for parameter inclusion

Fixes UnsupportedParamsError when using advanced reasoning parameters with Anthropic models through GitHub Copilot proxy.
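A minimal sketch of the shape of this change (class and method names mirror the diff discussed in the review below; the base class and import path are assumptions about LiteLLM's layout, not the merged code):

    from typing import List

    from litellm.llms.openai.chat.gpt_transformation import OpenAIGPTConfig  # assumed import path


    class GithubCopilotConfig(OpenAIGPTConfig):
        def get_supported_openai_params(self, model: str) -> List[str]:
            # Start from the OpenAI-compatible defaults of the base config.
            base_params = super().get_supported_openai_params(model)

            # Case-insensitive detection so "Claude-Sonnet-4" matches like "claude-sonnet-4".
            if "claude" in model.lower():
                base_params = base_params + ["thinking", "reasoning_effort"]

            return base_params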

Title

Add thinking and reasoning_effort parameter support for GitHub Copilot provider

Relevant issues

If you want to use these params dynamically send allowed_openai_params=['thinking'] in your request.
Traceback (most recent call last):
  File "/Users/elfrit01/Documents/Stepstone/copilot-litellm-proxy/.venv/lib/python3.11/site-packages/litellm/llms/anthropic/experimental_pass_through/adapters/handler.py", line 157, in async_anthropic_messages_handler
    completion_response = await litellm.acompletion(**completion_kwargs)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/elfrit01/Documents/Stepstone/copilot-litellm-proxy/.venv/lib/python3.11/site-packages/litellm/utils.py", line 1586, in wrapper_async
    raise e
  File "/Users/elfrit01/Documents/Stepstone/copilot-litellm-proxy/.venv/lib/python3.11/site-packages/litellm/utils.py", line 1437, in wrapper_async
    result = await original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/elfrit01/Documents/Stepstone/copilot-litellm-proxy/.venv/lib/python3.11/site-packages/litellm/main.py", line 563, in acompletion
    raise exception_type(
  File "/Users/elfrit01/Documents/Stepstone/copilot-litellm-proxy/.venv/lib/python3.11/site-packages/litellm/main.py", line 536, in acompletion
    init_response = await loop.run_in_executor(None, func_with_context)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/elfrit01/.pyenv/versions/3.11.0/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/elfrit01/Documents/Stepstone/copilot-litellm-proxy/.venv/lib/python3.11/site-packages/litellm/utils.py", line 1060, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/elfrit01/Documents/Stepstone/copilot-litellm-proxy/.venv/lib/python3.11/site-packages/litellm/main.py", line 3525, in completion
    raise exception_type(
  File "/Users/elfrit01/Documents/Stepstone/copilot-litellm-proxy/.venv/lib/python3.11/site-packages/litellm/main.py", line 1246, in completion
    optional_params = get_optional_params(
                      ^^^^^^^^^^^^^^^^^^^^
  File "/Users/elfrit01/Documents/Stepstone/copilot-litellm-proxy/.venv/lib/python3.11/site-packages/litellm/utils.py", line 3357, in get_optional_params
    _check_valid_arg(
  File "/Users/elfrit01/Documents/Stepstone/copilot-litellm-proxy/.venv/lib/python3.11/site-packages/litellm/utils.py", line 3340, in _check_valid_arg
    raise UnsupportedParamsError(
litellm.exceptions.UnsupportedParamsError: litellm.UnsupportedParamsError: github_copilot does not support parameters: ['thinking'], for model=claude-sonnet-4. To drop these, set `litellm.drop_params=True` or for proxy:

`litellm_settings:
 drop_params: true`
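Before this change, the only options named in the error were to drop the unsupported parameters or to allow them explicitly per request, for example (the model string is inferred from the traceback above; the thinking payload follows LiteLLM's Anthropic-style format):

    import litellm

    # Workaround 1: drop any params the provider config does not list as supported.
    litellm.drop_params = True

    # Workaround 2: allow the param explicitly for a single request instead of dropping it.
    response = litellm.completion(
        model="github_copilot/claude-sonnet-4",
        messages=[{"role": "user", "content": "Hello"}],
        thinking={"type": "enabled", "budget_tokens": 1024},
        allowed_openai_params=["thinking"],
    )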

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have added testing in the tests/litellm/ directory (adding at least 1 test is a hard requirement - see details)
  • I have added a screenshot of my new test passing locally
  • My PR passes all unit tests when running make test-unit
  • My PR's scope is as isolated as possible; it only solves 1 specific problem

Type

🆕 New Feature

Changes

Added support for the thinking parameter in the GitHub Copilot provider for Anthropic models.

image
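With the new parameter support, a call of roughly this shape should pass parameter validation instead of raising UnsupportedParamsError (the model string and parameter values are illustrative, and GitHub Copilot credentials must already be configured for LiteLLM):

    import litellm

    response = litellm.completion(
        model="github_copilot/claude-sonnet-4",
        messages=[{"role": "user", "content": "Explain the tradeoffs of extended thinking."}],
        thinking={"type": "enabled", "budget_tokens": 2048},  # Anthropic-style extended thinking
        # reasoning_effort="high" is the other newly supported parameter and can be used instead.
    )
    print(response.choices[0].message.content)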

…Copilot provider

… provider

- Add dynamic parameter support for anthropic models through GitHub Copilot
- Include thinking parameter for anthropic model compatibility
- Support reasoning_effort parameter for both anthropic and reasoning models
- Update test coverage for parameter validation logic
- Ensure proper parameter filtering based on model type
@ishaan-jaff (Contributor) left a review, commenting on this diff hunk:

base_params = super().get_supported_openai_params(model)

# Add Claude-specific parameters for Anthropic models
if "claude" in model.lower():

Isn't it just the Claude 4 family and not all Claude models?

timelfrink (Author) replied:

Fixed - this now only applies to models with extended thinking support (the Claude 4 family and Claude 3.7), not all Claude models.
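A rough sketch of that capability-gated check (the supports_reasoning helper is named in the follow-up commits below; the wrapper function and import path here are assumptions for illustration):

    from typing import List

    from litellm.utils import supports_reasoning  # registry lookup named in the commits below


    def add_extended_thinking_params(base_params: List[str], model: str) -> List[str]:
        # Hypothetical helper: gate the extra params on actual model capability
        # rather than on a "claude" substring match. The commits note that the
        # registry lookup expects lowercase model names and that entries are
        # provider-agnostic, so no custom_llm_provider is passed.
        if supports_reasoning(model=model.lower()):
            return base_params + ["thinking", "reasoning_effort"]
        return base_params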

@ishaan-jaff (Contributor) also flagged this hunk:

)
elif custom_llm_provider == "github_copilot":

this is not needed.

provider_config_manager on line 44 should already handle this

    if provider_config and request_type == "chat_completion":
        return provider_config.get_supported_openai_params(model=model)

please remove this

timelfrink (Author) replied:

Fixed - removed the redundant check since provider_config_manager already handles this.
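To illustrate why the branch was redundant: litellm.get_supported_openai_params resolves the provider config (the snippet the reviewer quoted) and delegates to it, so a query like the following already reaches GithubCopilotConfig (a usage sketch; the returned list depends on the registry):

    from litellm import get_supported_openai_params

    # The generic lookup resolves github_copilot through the provider config manager,
    # so no provider-specific elif is needed at the utils level.
    params = get_supported_openai_params(
        model="claude-sonnet-4",
        custom_llm_provider="github_copilot",
        request_type="chat_completion",
    )
    print("thinking" in params, "reasoning_effort" in params)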

…ended thinking support

Only models in the Claude 4 family and the Claude 3.7 family support extended thinking features.
Previously all models would incorrectly receive these parameters.

Now uses supports_reasoning() to check model registry for actual capability.
…rams

The provider_config_manager already handles github_copilot provider
through LlmProviders.GITHUB_COPILOT mapping, making the explicit
check unnecessary.
… support

- Fix supports_reasoning() call to use lowercase model names for proper lookup
- Remove custom_llm_provider parameter as model registry entries are provider-agnostic
- Update tests to use full model names with date stamps (required for supports_reasoning)
- Add test coverage for models without extended thinking support
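A hypothetical test in the spirit of those updates (the date-stamped model names and test name are illustrative; the real tests under tests/litellm/ may differ):

    import pytest

    from litellm import get_supported_openai_params


    @pytest.mark.parametrize(
        "model,expects_thinking",
        [
            ("claude-sonnet-4-20250514", True),     # extended thinking supported
            ("claude-3-5-sonnet-20240620", False),  # no extended thinking support
        ],
    )
    def test_github_copilot_extended_thinking_params(model, expects_thinking):
        params = get_supported_openai_params(
            model=model, custom_llm_provider="github_copilot"
        )
        assert ("thinking" in params) is expects_thinking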
…ithub-copilot-thinking-reasoning-support