Cache MCP tools to avoid calling multiple times when streaming #3236
base: main
Conversation
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up to date status, view the checks section at the bottom of the pull request.
Summary of Changes

Hello @hcadioli, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request implements a caching strategy for the MCP tools loaded by `McpToolset`, so that repeated `get_tools()` calls while streaming do not refetch them.
Response from ADK Triaging Agent

Hello @hcadioli, thank you for creating this PR! Before we can proceed with the review, could you please address the following points from our contribution guidelines:

You can find more details in our contribution guidelines. This information will help our reviewers to review your PR more efficiently. Thanks!
Code Review
This pull request introduces a caching mechanism to McpToolset to avoid refetching MCP tools on subsequent calls to get_tools(). While the goal of improving performance is good, the current implementation has a critical flaw: the cache is not context-aware. This can lead to incorrect tool lists being returned when the readonly_context changes between calls. I've left a detailed comment explaining the issue and suggesting alternative approaches. This correctness issue should be addressed before merging.
```python
if self._loaded_mcp_tools is not None:
  return self._loaded_mcp_tools
```
This caching implementation has a critical correctness issue and a potential race condition.
1. **Correctness Bug (Not Context-Aware):** The list of tools returned by `get_tools` depends on the `readonly_context` argument, which influences session headers and tool filtering. This implementation caches the tools from the first call and returns this cached list for all subsequent calls, regardless of their context. This will lead to incorrect behavior if `get_tools` is called with different contexts.
2. **Race Condition:** If multiple coroutines call `get_tools()` concurrently while the cache is empty, they may all bypass the cache check and proceed to fetch tools from the server simultaneously. This negates the performance benefit of caching.
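The race condition is easy to reproduce with a toy model. The sketch below is a hypothetical, simplified stand-in for the toolset (class and method names here are assumptions, not the real `McpToolset` API): two coroutines both see an empty cache before either has finished fetching, so the server is hit twice.

```python
import asyncio

# Hypothetical toolset illustrating the check-then-act race: the names
# and structure are assumptions, not the real McpToolset implementation.
class UnsafeCachingToolset:
  def __init__(self):
    self._cache = None
    self.fetch_count = 0  # how many times the "server" was contacted

  async def _fetch_tools(self):
    self.fetch_count += 1
    await asyncio.sleep(0.01)  # simulate network latency; yields control
    return ["tool_a", "tool_b"]

  async def get_tools(self):
    if self._cache is not None:        # check ...
      return self._cache
    tools = await self._fetch_tools()  # ... then act: not atomic
    self._cache = tools
    return tools

async def main():
  ts = UnsafeCachingToolset()
  # Both coroutines observe an empty cache and both fetch from the server.
  await asyncio.gather(ts.get_tools(), ts.get_tools())
  return ts.fetch_count

fetch_count = asyncio.run(main())
print(fetch_count)  # 2 — the cache saved nothing on the concurrent path
```

Because the first coroutine suspends at the `await` while the cache is still `None`, the second one passes the same check and fetches again.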
Due to the correctness bug, I recommend rethinking this caching strategy. An incorrect cache can be more harmful than no cache.
Recommendation:
A more robust solution would involve a context-aware cache. For example, you could cache the result of the expensive `session.list_tools()` call (which depends on headers) and then re-apply the context-specific filtering (`_is_tool_selected`) on every `get_tools` call.
If you are certain that a `McpToolset` instance will only ever be used with a single, unchanging context, please add a prominent comment or docstring to clarify this assumption. Even in that case, the race condition should be fixed using `asyncio.Lock` to ensure tools are fetched only once.
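One way to combine both fixes can be sketched as follows. This is a minimal illustration, not the actual `McpToolset` code: `_list_tools_from_server` and `_is_tool_selected` are hypothetical stand-ins, and the assumption is that the raw tool list depends only on connection setup while per-call filtering depends on `readonly_context`.

```python
import asyncio

# Sketch only: cache the expensive server fetch once (guarded by a lock),
# but re-apply context-dependent filtering on every call.
class CachingToolset:
  def __init__(self):
    self._raw_tools = None       # cached server response
    self._lock = asyncio.Lock()  # serializes the first fetch

  async def _list_tools_from_server(self):
    # Hypothetical stand-in for the real session.list_tools() call.
    await asyncio.sleep(0.01)
    return ["read_file", "write_file", "search"]

  def _is_tool_selected(self, tool, readonly_context):
    # Hypothetical stand-in for the real context-dependent filter.
    allowed = (readonly_context or {}).get("allowed")
    return allowed is None or tool in allowed

  async def get_tools(self, readonly_context=None):
    if self._raw_tools is None:
      async with self._lock:
        # Double-check inside the lock: another coroutine may have
        # populated the cache while this one waited to acquire it.
        if self._raw_tools is None:
          self._raw_tools = await self._list_tools_from_server()
    # Filtering runs on every call, so the result stays correct even
    # when readonly_context differs between callers.
    return [t for t in self._raw_tools
            if self._is_tool_selected(t, readonly_context)]
```

With this shape, concurrent first calls trigger exactly one server fetch, and callers with different contexts still receive correctly filtered tool lists.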
Hi @hcadioli, thank you for your contribution!
No description provided.