AAraKKe
left a comment
Thanks, looks very good! Just some comments to follow up on.
AAraKKe
left a comment
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 80fb3d2150
```python
if not self._programmatic_tool_calling:
    definitions = [{**d, "allowed_callers": ALLOWED_TOOL_CALLERS} for d in definitions]
```
Preserve direct tool invocation when PTC is disabled
`AnthropicAgent._get_tool_definitions()` currently adds `"allowed_callers": ["code_execution_20260120"]` when `programmatic_tool_calling` is `False`. In Anthropic tool semantics, omitting `allowed_callers` keeps tools model-invokable (direct), while restricting to `code_execution_*` opts the tool into programmatic/code-execution callers only. With the default constructor value (`False`), this effectively disables normal model tool calls in `send()` and makes tools unavailable in standard `messages.create` flows.
```python
output_tokens=response.usage.output_tokens,
cache_read_input_tokens=cache_read,
cache_creation_input_tokens=cache_creation,
context=ContextUsage(window_size=await self._get_context_window(), used_tokens=used_tokens),
```
Avoid dropping completed replies on context lookup failure
After `messages.create()` succeeds, `send()` unconditionally awaits `_get_context_window()` while building usage metadata. If `models.retrieve()` fails (network/API error), the method raises before history is updated and before returning the already-generated assistant output. This turns a successful completion into an exception path and can force retries (and duplicate spend) for transient metadata failures.
What does this PR do?
Introduces the `AnthropicAgent` class, the core agent layer for the `ddev` AI tooling. This includes:

- `AnthropicAgent` (`ddev/src/ddev/ai/agent/client.py`): an async agent that wraps the Anthropic API. It maintains multi-turn conversation history, supports tool filtering via an `allowed_tools` allowlist, maps API exceptions to typed error classes, and exposes a clean `send()` interface that returns a structured `AgentResponse`. The following types are also defined in this file: `StopReason` (a `StrEnum`), `ToolCall`, `TokenUsage`, and `AgentResponse`.
- Exception wrapper (`ddev/src/ddev/ai/agent/exceptions.py`): defines a hierarchy of `AgentError` subclasses (`AgentConnectionError`, `AgentRateLimitError`, `AgentAPIError`) that map directly to Anthropic SDK exception types.
- Unit tests in `ddev/tests/ai/agent/test_client.py`.
Motivation

This is the third step in building the `ddev` AI agent framework. The previous PRs established the tool registry and tool framework; this PR adds the agent layer that drives conversations with the model and dispatches tool calls.

Review checklist (to be filled by reviewers)
- Apply the `qa/skip-qa` label if the PR doesn't need to be tested during QA.
- Add a `backport/<branch-name>` label to the PR and it will automatically open a backport PR once this one is merged.