
Model Settings: Add Tool Choice ["auto", "required", "none"] #773

@Kurry

Description


Is your feature request related to a problem? Please describe.
When creating an LlmAgent with custom tools in the Google ADK, the agent often hallucinates plausible JSON responses rather than actually invoking the tool code. For example, when I query for real data (e.g., "Get company profile for MSFT"), it sometimes fabricates empty or incorrect JSON instead of executing my FunctionTool-wrapped API calls. Despite using skip_summarization=True and explicit instructions to return only the tool output, actual tool code execution is not always enforced.

Without strict, configurable tool invocation, the Google ADK is unusable for a large population of developers, especially in production or evaluation settings. For example, I want to run the agent locally with Ollama. If I want to switch models or experiment with different providers, I would need to run 100+ evaluations per agent version just to verify that tool calls are being made correctly and consistently; this is expensive, inefficient, and error-prone.

Describe the solution you'd like
I would like the ADK to support a model-level configuration system (similar to the ModelSettings tool_choice option in the OpenAI Agents SDK) where I can explicitly require tool execution for user queries. Specifically (a sketch follows this list):

  • Add a setting like tool_choice with options such as "auto", "required", or "none".
    • "required" would guarantee that the agent always calls a tool and never returns a model-generated (potentially hallucinated) response when tools are available.
    • If tool execution fails, the agent should error gracefully rather than hallucinate output.
  • Allow this and other model settings (temperature, max_tokens, tool call parallelism, etc.) to be passed simply at agent or query time.
  • These options should be part of a first-class ModelSettings dataclass or similar configuration, as in the OpenAI Agents SDK.
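As a rough sketch of what this could look like, all names below are illustrative, not an existing ADK API:

```python
from dataclasses import dataclass
from typing import Literal, Optional

@dataclass
class ModelSettings:
    """Proposed first-class settings object (illustrative field set)."""
    tool_choice: Literal["auto", "required", "none"] = "auto"
    temperature: Optional[float] = None
    max_tokens: Optional[int] = None
    parallel_tool_calls: Optional[bool] = None

# Hypothetical usage at agent construction time:
#   agent = LlmAgent(..., model_settings=ModelSettings(tool_choice="required"))
# With tool_choice="required", a final response without a preceding tool call
# would be treated as an error rather than surfaced to the caller.
```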

Describe alternatives you've considered

  • Adding stricter prompts/instructions to the agent to enforce tool usage (has not reliably worked)
  • Custom middleware or output-verification logic to check whether a response came from a tool or the LLM (adds complexity and feels like a workaround; a sketch follows this list)
  • Calling tools directly and avoiding the agent pattern, which loses multi-tool orchestration benefits
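For the second alternative, this is roughly the shape of the verification logic I mean, written against my reading of ADK's after_model_callback hook; treat the signature and import paths as approximate:

```python
from typing import Optional

from google.adk.agents.callback_context import CallbackContext
from google.adk.models import LlmResponse

def reject_untooled_answers(
    callback_context: CallbackContext, llm_response: LlmResponse
) -> Optional[LlmResponse]:
    """Fail loudly when the model answers directly instead of calling a tool."""
    parts = llm_response.content.parts if llm_response.content else []
    if any(getattr(part, "function_call", None) for part in parts):
        return None  # a tool is being called: pass the response through
    # No function call found: the text is model-generated and untrusted.
    # In practice I would re-prompt or abort; raising keeps the sketch short.
    raise RuntimeError("Model answered without invoking a tool.")

# agent = LlmAgent(..., after_model_callback=reject_untooled_answers)
```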

Additional context
The OpenAI Agents SDK ModelSettings provides a pythonic and explicit way to control this behavior. Here is a summary:

  • With tool_choice="required", I can ensure queries always trigger tool calls and results are not fabricated by the model (see the example after this list).
  • Parallel tool calls and other advanced options are easily configured.
  • See my attached code: in ADK, despite best efforts (after_tool_callback, skip_summarization, strict instructions), the agent continues to hallucinate responses.
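For comparison, this is how the OpenAI Agents SDK expresses it; the tool body here is a stand-in for a real API call:

```python
from agents import Agent, ModelSettings, Runner, function_tool

@function_tool
def get_company_profile(ticker: str) -> dict:
    """Stand-in for a real market-data API call."""
    return {"ticker": ticker, "name": "Microsoft Corporation"}

agent = Agent(
    name="profile_agent",
    instructions="Answer using tool output only.",
    tools=[get_company_profile],
    # "required" forces a tool call; the model cannot answer from memory.
    model_settings=ModelSettings(tool_choice="required", parallel_tool_calls=False),
)

result = Runner.run_sync(agent, "Get company profile for MSFT")
print(result.final_output)
```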

Not having robust, configurable tool invocation makes reliable agent evaluation and deployment across multiple models or providers (like running locally with Ollama) impractical. Developers are forced into excessive manual evaluation just to verify core agent functionality.

Having a native ModelSettings-like configuration with explicit tool_choice enforcement in ADK would resolve these UX and reliability issues and make tool use far more robust for API integration and retrieval-augmented agent scenarios.
