
Conversation

@hcadioli

No description provided.


google-cla bot commented Oct 20, 2025

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

@gemini-code-assist

Summary of Changes

Hello @hcadioli, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request implements a caching strategy for the McpToolset to optimize the retrieval of MCP tools. By storing the list of tools after the initial fetch, subsequent calls to get_tools will return the cached data, thereby reducing processing overhead and improving performance, particularly in streaming contexts where repeated tool lookups might occur.

Highlights

  • Caching Mechanism: Introduced a caching mechanism within the McpToolset class to store fetched tools, preventing redundant calls to get_tools.
  • New Instance Variable: Added a new instance variable _loaded_mcp_tools of type List[BaseTool] | None to McpToolset to hold the cached list of tools.
  • Optimized Tool Retrieval: Modified the get_tools method to first check if _loaded_mcp_tools is already populated. If so, it returns the cached tools immediately, otherwise it proceeds to fetch and then cache them.
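The pattern described in the highlights can be sketched roughly as follows. Only `get_tools` and `_loaded_mcp_tools` are names from the PR itself; the stub classes, `_fetch_tools_from_server`, and the counters are illustrative stand-ins, not the actual ADK implementation.

```python
import asyncio
from typing import List, Optional


class BaseTool:
    """Stand-in for ADK's BaseTool; only the name matters for this sketch."""

    def __init__(self, name: str):
        self.name = name


class McpToolsetSketch:
    """Illustrates the caching pattern described in the PR summary."""

    def __init__(self):
        # New instance variable holding the cached tool list.
        self._loaded_mcp_tools: Optional[List[BaseTool]] = None
        self.fetch_count = 0  # instrumentation for this sketch only

    async def _fetch_tools_from_server(self) -> List[BaseTool]:
        # Placeholder for the expensive MCP server round trip.
        self.fetch_count += 1
        return [BaseTool("read_file"), BaseTool("write_file")]

    async def get_tools(self) -> List[BaseTool]:
        # Return the cached list if an earlier call already populated it.
        if self._loaded_mcp_tools is not None:
            return self._loaded_mcp_tools
        self._loaded_mcp_tools = await self._fetch_tools_from_server()
        return self._loaded_mcp_tools


async def demo():
    toolset = McpToolsetSketch()
    first = await toolset.get_tools()
    second = await toolset.get_tools()
    # The server is hit once; the same list object is returned both times.
    return toolset.fetch_count, first is second


print(asyncio.run(demo()))
```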


adk-bot commented Oct 20, 2025

Response from ADK Triaging Agent

Hello @hcadioli, thank you for creating this PR!

Before we can proceed with the review, could you please address the following points from our contribution guidelines:

  • Sign our Contributor License Agreement: It looks like the CLA check is failing. Please sign the Contributor License Agreement so we can accept your contribution.
  • Associate an Issue: For a new feature like this, please create a GitHub issue that describes the purpose and context of this change and link it to this PR.
  • Provide a Testing Plan: Could you please add a "Testing Plan" section to your PR description explaining how you've tested these changes?
  • Add Unit Tests: Please add unit tests for the new caching logic to ensure it works as expected and to prevent future regressions.
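A unit test along the lines requested might look like the following self-contained sketch. `FakeToolset` and its attributes are hypothetical stand-ins for the real McpToolset, which would need connection parameters and a mocked MCP session; only the caching behavior under test mirrors the PR.

```python
import asyncio


class FakeToolset:
    """Hypothetical stand-in reproducing the PR's caching behavior."""

    def __init__(self):
        self._loaded_mcp_tools = None
        self.server_calls = 0  # counts simulated round trips to the server

    async def get_tools(self):
        if self._loaded_mcp_tools is not None:
            return self._loaded_mcp_tools
        self.server_calls += 1
        self._loaded_mcp_tools = ["tool_a", "tool_b"]
        return self._loaded_mcp_tools


def test_second_call_hits_cache():
    toolset = FakeToolset()

    async def scenario():
        first = await toolset.get_tools()
        second = await toolset.get_tools()
        return first, second

    first, second = asyncio.run(scenario())
    assert first is second            # cached object is returned
    assert toolset.server_calls == 1  # server hit exactly once


test_second_call_hits_cache()  # passes silently
```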

You can find more details in our contribution guidelines.

This information will help our reviewers to review your PR more efficiently. Thanks!


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a caching mechanism to McpToolset to avoid refetching MCP tools on subsequent calls to get_tools(). While the goal of improving performance is good, the current implementation has a critical flaw: the cache is not context-aware. This can lead to incorrect tool lists being returned when the readonly_context changes between calls. I've left a detailed comment explaining the issue and suggesting alternative approaches. This correctness issue should be addressed before merging.

Comment on lines +174 to +175
if self._loaded_mcp_tools is not None:
return self._loaded_mcp_tools


critical

This caching implementation has a critical correctness issue and a potential race condition.

  • Correctness Bug (Not Context-Aware): The list of tools returned by get_tools depends on the readonly_context argument, which influences session headers and tool filtering. This implementation caches the tools from the first call and returns this cached list for all subsequent calls, regardless of their context. This will lead to incorrect behavior if get_tools is called with different contexts.

  • Race Condition: If multiple coroutines call get_tools() concurrently while the cache is empty, they may all bypass the cache check and proceed to fetch tools from the server simultaneously. This negates the performance benefit of caching.

Due to the correctness bug, I recommend rethinking this caching strategy. An incorrect cache can be more harmful than no cache.

Recommendation:
A more robust solution would involve a context-aware cache. For example, you could cache the result of the expensive session.list_tools() call (which depends on headers) and then re-apply the context-specific filtering (_is_tool_selected) on every get_tools call.

If you are certain that a McpToolset instance will only ever be used with a single, unchanging context, please add a prominent comment or docstring to clarify this assumption. Even in that case, the race condition should be fixed using asyncio.Lock to ensure tools are fetched only once.
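The lock-plus-context-aware approach suggested above could look something like this sketch. `get_tools`, `list_tools`, and `_is_tool_selected` are names the review comment mentions; everything else (the class, the shape of `readonly_context`, the placeholder server call) is illustrative and not ADK's actual code.

```python
import asyncio
from typing import List, Optional


class ContextAwareToolset:
    """Sketch: cache the expensive server fetch once, behind a lock,
    but re-apply per-context filtering on every get_tools call."""

    def __init__(self):
        self._raw_tools: Optional[List[str]] = None  # cached server response
        self._fetch_lock = asyncio.Lock()
        self.server_calls = 0  # instrumentation for this sketch only

    async def _list_tools_from_server(self) -> List[str]:
        # Placeholder for the expensive session.list_tools() round trip.
        self.server_calls += 1
        return ["read_file", "write_file", "search"]

    def _is_tool_selected(self, tool: str, readonly_context) -> bool:
        # Placeholder for ADK's filtering; here the context is just an
        # optional set of allowed tool names.
        if readonly_context is None:
            return True
        return tool in readonly_context

    async def get_tools(self, readonly_context=None) -> List[str]:
        if self._raw_tools is None:
            async with self._fetch_lock:
                # Double-check under the lock so concurrent callers
                # trigger only one server fetch.
                if self._raw_tools is None:
                    self._raw_tools = await self._list_tools_from_server()
        # Filtering stays context-sensitive even though the fetch is cached.
        return [t for t in self._raw_tools
                if self._is_tool_selected(t, readonly_context)]
```

With this shape, concurrent callers with different contexts share one fetch but each get a correctly filtered list, addressing both issues the review raises.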

@ryanaiagent ryanaiagent self-assigned this Oct 21, 2025
@ryanaiagent ryanaiagent added the mcp [Component] Issues about MCP support label Oct 22, 2025

ryanaiagent commented Oct 22, 2025

Hi @hcadioli, thank you for your contribution!
It appears you haven't yet signed the Contributor License Agreement (CLA). Please visit https://cla.developers.google.com/ to complete the signing process. Once the CLA is signed, we'll be able to proceed with the review of your PR. Thank you!
