Commit 0a4b6e5

Merge branch 'main' into feat/eager-table-preparation
2 parents: be0f4e2 + b8e7647

File tree

24 files changed: +1893 -58 lines
Lines changed: 1 addition & 1 deletion

@@ -1,3 +1,3 @@
 {
-  ".": "1.27.1"
+  ".": "1.27.2"
 }

CHANGELOG.md

Lines changed: 5 additions & 1 deletion

@@ -1,7 +1,11 @@
 # Changelog
 
+## [1.27.2](https://github.com/google/adk-python/compare/v1.27.1...v1.27.2) (2026-03-17)
+### Bug Fixes
+* Use valid dataplex OAuth scope for BigQueryToolset ([4010716](https://github.com/google/adk-python/commit/4010716470fc83918dc367c5971342ff551401c8))
+* Store and retrieve usage_metadata in Vertex AI custom_metadata ([b318eee](https://github.com/google/adk-python/commit/b318eee979b1625d3d23ad98825c88f54016a12f))
 
-## [1.27.1](https://github.com/google/adk-python/compare/v1.26.0...v1.27.0) (2026-03-13)
+## [1.27.1](https://github.com/google/adk-python/compare/v1.27.0...v1.27.1) (2026-03-13)
 ### Bug Fixes
 * Rolling back change to fix issue affecting LlmAgent creation due to missing version field ([0e18f81](https://github.com/google/adk-python/commit/0e18f81a5cd0d0392ded653b1a63a236449a2685))

contributing/samples/adk_triaging_agent/agent.py

Lines changed: 3 additions & 4 deletions

@@ -26,15 +26,14 @@
 import requests
 
 LABEL_TO_OWNER = {
-    "a2a": "seanzhou1023",
     "agent engine": "yeesian",
-    "auth": "seanzhou1023",
+    "auth": "xuanyang15",
     "bq": "shobsi",
     "core": "Jacksunwei",
     "documentation": "joefernandez",
     "eval": "ankursharmas",
-    "live": "seanzhou1023",
-    "mcp": "seanzhou1023",
+    "live": "wuliang229",
+    "mcp": "wukath",
     "models": "xuanyang15",
     "services": "DeanChensj",
     "tools": "xuanyang15",
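The diff above reassigns several triage labels to new owners in a plain label-to-owner dict. As a minimal sketch of how such a mapping is typically consumed, here is a hypothetical `assign_owner` helper (the helper and its fallback value are illustrations, not part of this commit; the dict is trimmed to a few post-commit entries):

```python
# Sketch: route an issue label to an owner via a dict lookup.
# LABEL_TO_OWNER mirrors a few entries from the sample after this commit;
# assign_owner and its "unassigned" fallback are hypothetical.
LABEL_TO_OWNER = {
    "agent engine": "yeesian",
    "auth": "xuanyang15",
    "live": "wuliang229",
    "mcp": "wukath",
    "models": "xuanyang15",
}


def assign_owner(label: str, default: str = "unassigned") -> str:
  """Returns the owner for a label, falling back to a default."""
  return LABEL_TO_OWNER.get(label.lower(), default)


print(assign_owner("auth"))  # xuanyang15
print(assign_owner("unknown-label"))  # unassigned
```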
Lines changed: 116 additions & 0 deletions

# Spanner Admin Tools Sample

## Introduction

This sample agent demonstrates the Spanner first-party tools in ADK,
distributed via the `google.adk.tools.spanner` module. These tools include:

1. `list_instances`

   Fetches Spanner instance names present in a project.

1. `get_instance`

   Fetches details of a given Spanner instance.

1. `create_database`

   Creates a Spanner database within a given instance and project.

1. `list_databases`

   Fetches Spanner database names present in an instance.

1. `create_instance`

   Creates a Spanner instance within a GCP project.

1. `list_instance_configs`

   Fetches Spanner instance configurations available for a project.

1. `get_instance_config`

   Fetches details of a Spanner instance configuration.

## How to use

Set up environment variables in your `.env` file for using
[Google AI Studio](https://google.github.io/adk-docs/get-started/quickstart/#gemini---google-ai-studio)
or
[Google Cloud Vertex AI](https://google.github.io/adk-docs/get-started/quickstart/#gemini---google-cloud-vertex-ai)
as the LLM service for your agent. For example, to use Google AI Studio you
would set:

* GOOGLE_GENAI_USE_VERTEXAI=FALSE
* GOOGLE_API_KEY={your api key}

### With Application Default Credentials

This mode is useful for quick development when the agent builder is the only
user interacting with the agent. The tools run with these credentials.

1. Create application default credentials on the machine where the agent will
   run by following https://cloud.google.com/docs/authentication/provide-credentials-adc.

1. Set `CREDENTIALS_TYPE=None` in `agent.py`.

1. Run the agent.

### With Service Account Keys

This mode is useful for quick development when the agent builder wants to run
the agent with service account credentials. The tools run with these
credentials.

1. Create a service account key by following https://cloud.google.com/iam/docs/service-account-creds#user-managed-keys.

1. Set `CREDENTIALS_TYPE=AuthCredentialTypes.SERVICE_ACCOUNT` in `agent.py`.

1. Download the key file and replace `"service_account_key.json"` with its path.

1. Run the agent.

### With Interactive OAuth

1. Follow
   https://developers.google.com/identity/protocols/oauth2#1.-obtain-oauth-2.0-credentials-from-the-dynamic_data.setvar.console_name.
   to get your client ID and client secret. Be sure to choose "web" as your
   client type.

1. Follow https://developers.google.com/workspace/guides/configure-oauth-consent
   to declare the scopes "https://www.googleapis.com/auth/spanner.data" and
   "https://www.googleapis.com/auth/spanner.admin"; this declaration is used
   during the OAuth consent review.

1. Follow
   https://developers.google.com/identity/protocols/oauth2/web-server#creatingcred
   to add http://localhost/dev-ui/ to "Authorized redirect URIs".

   Note: localhost here is just the hostname you use to access the dev UI;
   replace it with the actual hostname you use.

1. On the first run, allow pop-ups for localhost in Chrome.

1. Configure your `.env` file to add two more variables before running the
   agent:

   * OAUTH_CLIENT_ID={your client id}
   * OAUTH_CLIENT_SECRET={your client secret}

   Note: don't create a separate `.env` file; add these to the same `.env`
   file that stores your Vertex AI or Dev ML credentials.

1. Set `CREDENTIALS_TYPE=AuthCredentialTypes.OAUTH2` in `agent.py` and run the
   agent.

## Sample prompts

* Show me all Spanner instances in my project.
* Give me details about the 'my-instance' Spanner instance.
* List all databases in instance 'my-instance'.
* Create a new Spanner database named 'my-db' in instance 'my-instance'.
* List all instance configurations available for my project.
* Get details about the 'regional-us-central1' configuration.
* Create a Spanner instance 'new-instance' with the 'regional-us-central1' config.
Lines changed: 15 additions & 0 deletions

# Copyright 2026 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from . import agent
Lines changed: 77 additions & 0 deletions

# Copyright 2026 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os

from google.adk.agents.llm_agent import LlmAgent
from google.adk.auth.auth_credential import AuthCredentialTypes
from google.adk.tools.spanner.admin_toolset import SpannerAdminToolset
from google.adk.tools.spanner.spanner_credentials import SpannerCredentialsConfig
import google.auth

# Define an appropriate credential type.
# Set to None to use the application default credentials (ADC) for quick
# development.
CREDENTIALS_TYPE = None

if CREDENTIALS_TYPE == AuthCredentialTypes.OAUTH2:
  # Initialize the tools to do interactive OAuth.
  # The environment variables OAUTH_CLIENT_ID and OAUTH_CLIENT_SECRET
  # must be set.
  credentials_config = SpannerCredentialsConfig(
      client_id=os.getenv("OAUTH_CLIENT_ID"),
      client_secret=os.getenv("OAUTH_CLIENT_SECRET"),
      scopes=[
          "https://www.googleapis.com/auth/spanner.admin",
          "https://www.googleapis.com/auth/spanner.data",
      ],
  )
elif CREDENTIALS_TYPE == AuthCredentialTypes.SERVICE_ACCOUNT:
  # Initialize the tools to use the credentials in the service account key.
  # If this flow is enabled, make sure to replace the file path with your own
  # service account key file:
  # https://cloud.google.com/iam/docs/service-account-creds#user-managed-keys
  creds, _ = google.auth.load_credentials_from_file("service_account_key.json")
  credentials_config = SpannerCredentialsConfig(credentials=creds)
else:
  # Initialize the tools to use the application default credentials:
  # https://cloud.google.com/docs/authentication/provide-credentials-adc
  application_default_credentials, _ = google.auth.default()
  credentials_config = SpannerCredentialsConfig(
      credentials=application_default_credentials
  )

spanner_admin_toolset = SpannerAdminToolset(
    credentials_config=credentials_config,
)

# The variable name `root_agent` determines what your root agent is for the
# debug CLI.
root_agent = LlmAgent(
    model="gemini-2.5-flash",
    name="spanner_admin_agent",
    description=(
        "Agent to perform Spanner admin tasks and answer questions about"
        " Spanner databases."
    ),
    instruction="""\
      You are a Spanner admin agent with access to several Spanner admin tools.
      Make use of those tools to answer user's questions and perform admin
      tasks like listing instances or databases.
    """,
    tools=[
        # Use tools from the Spanner admin toolset.
        spanner_admin_toolset,
    ],
)

src/google/adk/features/_feature_registry.py

Lines changed: 4 additions & 0 deletions

@@ -45,6 +45,7 @@ class FeatureName(str, Enum):
   PUBSUB_TOOLSET = "PUBSUB_TOOLSET"
   SKILL_TOOLSET = "SKILL_TOOLSET"
   SPANNER_TOOLSET = "SPANNER_TOOLSET"
+  SPANNER_ADMIN_TOOLSET = "SPANNER_ADMIN_TOOLSET"
   SPANNER_TOOL_SETTINGS = "SPANNER_TOOL_SETTINGS"
   SPANNER_VECTOR_STORE = "SPANNER_VECTOR_STORE"
   TOOL_CONFIG = "TOOL_CONFIG"
@@ -136,6 +137,9 @@ class FeatureConfig:
     FeatureName.SKILL_TOOLSET: FeatureConfig(
         FeatureStage.EXPERIMENTAL, default_on=True
     ),
+    FeatureName.SPANNER_ADMIN_TOOLSET: FeatureConfig(
+        FeatureStage.EXPERIMENTAL, default_on=True
+    ),
     FeatureName.SPANNER_TOOLSET: FeatureConfig(
         FeatureStage.EXPERIMENTAL, default_on=True
     ),
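The registry entry above gates the new toolset behind an experimental, default-on feature flag. A self-contained sketch of that pattern (the `FeatureStage`/`FeatureConfig` classes and `is_enabled` helper here are simplified stand-ins, not the actual ADK implementation, which also supports overrides):

```python
from dataclasses import dataclass
from enum import Enum


class FeatureStage(Enum):
  EXPERIMENTAL = "experimental"
  STABLE = "stable"


@dataclass(frozen=True)
class FeatureConfig:
  stage: FeatureStage
  default_on: bool = False


# Registry mapping feature names to their stage and default state,
# mirroring how the commit registers SPANNER_ADMIN_TOOLSET.
FEATURE_REGISTRY = {
    "SPANNER_TOOLSET": FeatureConfig(FeatureStage.EXPERIMENTAL, default_on=True),
    "SPANNER_ADMIN_TOOLSET": FeatureConfig(
        FeatureStage.EXPERIMENTAL, default_on=True
    ),
}


def is_enabled(name: str) -> bool:
  """A feature is enabled when registered and default_on (overrides omitted)."""
  config = FEATURE_REGISTRY.get(name)
  return config is not None and config.default_on


print(is_enabled("SPANNER_ADMIN_TOOLSET"))  # True
```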

src/google/adk/models/lite_llm.py

Lines changed: 94 additions & 4 deletions
@@ -384,8 +384,42 @@ def _iter_reasoning_texts(reasoning_value: Any) -> Iterable[str]:
   yield str(reasoning_value)
 
 
+def _is_thinking_blocks_format(reasoning_value: Any) -> bool:
+  """Returns True if reasoning_value is Anthropic thinking_blocks format.
+
+  Anthropic thinking_blocks is a list of dicts, each with 'type', 'thinking',
+  and 'signature' keys.
+  """
+  if not isinstance(reasoning_value, list) or not reasoning_value:
+    return False
+  first = reasoning_value[0]
+  return isinstance(first, dict) and "signature" in first
+
+
 def _convert_reasoning_value_to_parts(reasoning_value: Any) -> List[types.Part]:
-  """Converts provider reasoning payloads into Gemini thought parts."""
+  """Converts provider reasoning payloads into Gemini thought parts.
+
+  Handles Anthropic thinking_blocks (list of dicts with type/thinking/signature)
+  by preserving the signature on each part's thought_signature field. This is
+  required for Anthropic to maintain thinking across tool call boundaries.
+  """
+  if _is_thinking_blocks_format(reasoning_value):
+    parts: List[types.Part] = []
+    for block in reasoning_value:
+      if not isinstance(block, dict):
+        continue
+      block_type = block.get("type", "")
+      if block_type == "redacted":
+        continue
+      thinking_text = block.get("thinking", "")
+      signature = block.get("signature", "")
+      if not thinking_text:
+        continue
+      part = types.Part(text=thinking_text, thought=True)
+      if signature:
+        part.thought_signature = signature.encode("utf-8")
+      parts.append(part)
+    return parts
   return [
       types.Part(text=text, thought=True)
       for text in _iter_reasoning_texts(reasoning_value)
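The conversion in this hunk can be exercised in isolation. Below is a sketch that mirrors the same heuristic on plain dicts, with a `ThoughtPart` dataclass standing in for `types.Part` (the stand-in and function names are assumptions for illustration, not ADK APIs):

```python
from dataclasses import dataclass
from typing import Any, List, Optional


@dataclass
class ThoughtPart:
  """Stand-in for types.Part, restricted to the fields used here."""
  text: str
  thought: bool = True
  thought_signature: Optional[bytes] = None


def is_thinking_blocks_format(value: Any) -> bool:
  """Mirrors the commit's check: non-empty list whose first dict has a signature."""
  return (
      isinstance(value, list)
      and bool(value)
      and isinstance(value[0], dict)
      and "signature" in value[0]
  )


def convert(value: Any) -> List[ThoughtPart]:
  """Converts Anthropic-style thinking blocks, skipping redacted/empty ones."""
  parts: List[ThoughtPart] = []
  if not is_thinking_blocks_format(value):
    return parts
  for block in value:
    if not isinstance(block, dict) or block.get("type", "") == "redacted":
      continue
    thinking = block.get("thinking", "")
    if not thinking:
      continue
    part = ThoughtPart(text=thinking)
    signature = block.get("signature", "")
    if signature:
      part.thought_signature = signature.encode("utf-8")
    parts.append(part)
  return parts


blocks = [
    {"type": "thinking", "thinking": "plan step", "signature": "sig1"},
    {"type": "redacted", "thinking": "hidden", "signature": "sig2"},
]
print([p.text for p in convert(blocks)])  # ['plan step']
```

Note the design choice carried over from the diff: redacted blocks and blocks without thinking text are dropped, while the signature (when present) is preserved as bytes so it can round-trip back to the provider.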
@@ -396,12 +430,19 @@ def _convert_reasoning_value_to_parts(reasoning_value: Any) -> List[types.Part]:
 def _extract_reasoning_value(message: Message | Delta | None) -> Any:
   """Fetches the reasoning payload from a LiteLLM message.
 
-  Checks for both 'reasoning_content' (LiteLLM standard, used by Azure/Foundry,
-  Ollama via LiteLLM) and 'reasoning' (used by LM Studio, vLLM).
-  Prioritizes 'reasoning_content' when both are present.
+  Checks for 'thinking_blocks' (Anthropic structured format with signatures),
+  'reasoning_content' (LiteLLM standard, used by Azure/Foundry, Ollama via
+  LiteLLM) and 'reasoning' (used by LM Studio, vLLM).
+  Prioritizes 'thinking_blocks' when present (Anthropic models), then
+  'reasoning_content', then 'reasoning'.
   """
   if message is None:
     return None
+  # Anthropic models return thinking_blocks with type/thinking/signature fields.
+  # This must be preserved to maintain thinking across tool call boundaries.
+  thinking_blocks = message.get("thinking_blocks")
+  if thinking_blocks is not None:
+    return thinking_blocks
   reasoning_content = message.get("reasoning_content")
   if reasoning_content is not None:
     return reasoning_content
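The lookup priority the docstring describes (thinking_blocks, then reasoning_content, then reasoning) can be sketched against a plain dict. `extract_reasoning_value` here is a simplified stand-in operating on dicts, not the ADK function, which takes LiteLLM `Message`/`Delta` objects:

```python
from typing import Any, Optional


def extract_reasoning_value(message: Optional[dict]) -> Any:
  """Returns the first present reasoning key, in the commit's priority order:
  thinking_blocks (Anthropic), then reasoning_content, then reasoning."""
  if message is None:
    return None
  for key in ("thinking_blocks", "reasoning_content", "reasoning"):
    value = message.get(key)
    if value is not None:
      return value
  return None


# thinking_blocks wins even when reasoning_content is also present.
msg = {"reasoning_content": "chain", "thinking_blocks": [{"signature": "s"}]}
print(extract_reasoning_value(msg))  # [{'signature': 's'}]
```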
@@ -835,6 +876,30 @@ async def _content_to_message_param(
       else final_content
   )
 
+  # For Anthropic models, rebuild thinking_blocks with signatures so that
+  # thinking is preserved across tool call boundaries. Without this,
+  # Anthropic silently drops thinking after the first turn.
+  if model and _is_anthropic_model(model) and reasoning_parts:
+    thinking_blocks = []
+    for part in reasoning_parts:
+      if part.text and part.thought_signature:
+        sig = part.thought_signature
+        if isinstance(sig, bytes):
+          sig = sig.decode("utf-8")
+        thinking_blocks.append({
+            "type": "thinking",
+            "thinking": part.text,
+            "signature": sig,
+        })
+    if thinking_blocks:
+      msg = ChatCompletionAssistantMessage(
+          role=role,
+          content=final_content,
+          tool_calls=tool_calls or None,
+      )
+      msg["thinking_blocks"] = thinking_blocks  # type: ignore[typeddict-unknown-key]
+      return msg
+
   reasoning_texts = []
   for part in reasoning_parts:
     if part.text:
@@ -1943,6 +2008,31 @@ def _build_request_log(req: LlmRequest) -> str:
   """
 
 
+def _is_anthropic_model(model_string: str) -> bool:
+  """Check if the model is an Anthropic Claude model accessed via LiteLLM.
+
+  Detects models using the anthropic/ provider prefix, bedrock/ models that
+  contain 'anthropic' or 'claude', and vertex_ai/ models that contain 'claude'.
+
+  Args:
+    model_string: A LiteLLM model string (e.g., "anthropic/claude-4-sonnet",
+      "bedrock/anthropic.claude-3-5-sonnet", "vertex_ai/claude-4-sonnet")
+
+  Returns:
+    True if it's an Anthropic Claude model, False otherwise.
+  """
+  lower = model_string.lower()
+  if lower.startswith("anthropic/"):
+    return True
+  if lower.startswith("bedrock/"):
+    model_part = lower.split("/", 1)[1]
+    return "anthropic" in model_part or "claude" in model_part
+  if lower.startswith("vertex_ai/"):
+    model_part = lower.split("/", 1)[1]
+    return "claude" in model_part
+  return False
+
+
 def _is_litellm_vertex_model(model_string: str) -> bool:
   """Check if the model is a Vertex AI model accessed via LiteLLM.
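The detection heuristic above is pure string matching, so it is easy to sanity-check standalone. This is a copy of the logic for illustration, not an import from ADK:

```python
def is_anthropic_model(model_string: str) -> bool:
  """True for anthropic/ models, bedrock/ models mentioning anthropic or
  claude, and vertex_ai/ models mentioning claude; False otherwise."""
  lower = model_string.lower()
  if lower.startswith("anthropic/"):
    return True
  if lower.startswith("bedrock/"):
    model_part = lower.split("/", 1)[1]
    return "anthropic" in model_part or "claude" in model_part
  if lower.startswith("vertex_ai/"):
    model_part = lower.split("/", 1)[1]
    return "claude" in model_part
  return False


# Exercise the three provider prefixes plus a non-Claude Vertex model.
for name in (
    "anthropic/claude-4-sonnet",
    "bedrock/anthropic.claude-3-5-sonnet",
    "vertex_ai/claude-4-sonnet",
    "vertex_ai/gemini-2.5-flash",
):
  print(name, is_anthropic_model(name))
```

One consequence of this design: the check is prefix-scoped, so a bare model name such as "claude-4-sonnet" without a provider prefix returns False.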
