
[ML] Add AnthropicModelHandler for direct Claude inference#37956

Open
bvolpato wants to merge 1 commit into apache:master from bvolpato:bvolpato/anthropic-inference

Conversation

@bvolpato
Contributor

@bvolpato bvolpato commented Mar 26, 2026

Adds a ModelHandler for Anthropic Claude models using the Messages API, enabling direct inference in Beam pipelines without requiring Vertex AI or GCP setup.

This follows the same pattern as the existing GeminiModelHandler, which provides direct Google AI Studio access alongside the Vertex AI-based handler. The motivation is the same: users should be able to use Claude models with just an API key, without needing GCP project setup, service accounts, or IAM configuration.

Changes

  • anthropic_inference.py - AnthropicModelHandler extending RemoteModelHandler, with message_from_string and message_from_conversation request functions, and retry logic for 429/5xx errors
  • Added support for system prompts: set global pipeline behavior via system= to configure the model persona or a strict instruction set.
  • Added support for structured JSON outputs: pass output_config= following Anthropic's GA JSON Schema format to enforce structured responses from Claude endpoints (requires anthropic>=0.86.0).
  • anthropic_inference_test.py - Unit tests covering retry logic, request functions, model handler, output formatting logic, system prompts, and a full pipeline E2E test with picklable fakes.
  • anthropic_inference_it_test.py - Integration tests for generating structured outputs (e.g. Fizz rules) and multi-turn conversations (requires ANTHROPIC_API_KEY).
  • anthropic_tests_requirements.txt - Testing dependency declaration enforcing anthropic>=0.86.0.
  • pytest.ini - Registration of the anthropic_postcommit marker.
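For reviewers who want a concrete picture of the request functions, here is a minimal sketch of the kind of payload a message_from_string-style helper would build. The function name and defaults below are illustrative assumptions (the dict shape follows Anthropic's public Messages API; the actual signatures in this PR may differ):

```python
# Illustrative sketch only: the real message_from_string in this PR may
# have a different signature. The dict shape follows Anthropic's
# Messages API (model, max_tokens, messages, optional system prompt).
def build_message_request(prompt, model="claude-sonnet-4-5",
                          max_tokens=1024, system=None):
    request = {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    if system is not None:
        # Constructor-level system prompt, as described in the bullet above
        request["system"] = system
    return request

req = build_message_request("Summarize this log line.",
                            system="You are a terse log triage bot.")
```

message_from_conversation would differ mainly in accepting a list of role/content turns instead of a single string.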

CI note

The integration tests are not wired into inferencePostCommitIT or any Dataflow Gradle task, since the OSS project will not have Anthropic API keys configured. The IT tests exist for contributors to run locally:

ANTHROPIC_API_KEY="..." python -m pytest apache_beam/ml/inference/anthropic_inference_it_test.py -v

Unit tests (mock-based, no API key needed):

python -m pytest apache_beam/ml/inference/anthropic_inference_test.py -v
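A note on the "picklable fakes" mentioned above: Beam serializes handlers and their collaborators between workers, so test doubles must survive pickling. Module-level classes do; closures and MagicMock instances generally do not. A hypothetical illustration of the pattern (not the PR's actual test code):

```python
import pickle

# A module-level fake client is picklable, unlike a lambda or a
# unittest.mock.MagicMock, so it can ride along in a Beam pipeline test.
class FakeAnthropicClient:
    """Stands in for the real client; returns a canned response."""
    def create_message(self, **kwargs):
        return {"content": [{"type": "text", "text": "fizz"}]}

client = FakeAnthropicClient()
restored = pickle.loads(pickle.dumps(client))  # round-trips cleanly
```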


@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request introduces a new AnthropicModelHandler to Apache Beam's RunInference transform, enabling direct integration with Anthropic Claude models via their Messages API. This allows users to perform batch inference with Claude models using only an API key, bypassing the need for Vertex AI or GCP setup, similar to the existing Gemini integration. The change expands Beam's ML inference capabilities by adding support for another major LLM provider.

Highlights

  • New AnthropicModelHandler: Added a dedicated AnthropicModelHandler to enable direct inference with Anthropic Claude models within Apache Beam pipelines, bypassing the need for Vertex AI or GCP setup.
  • Request Functions: Implemented message_from_string and message_from_conversation functions to handle different input formats for Claude's Messages API, supporting both single-turn prompts and multi-turn conversations.
  • Robust Error Handling: Included retry logic for Anthropic API errors, specifically targeting rate limiting (429) and server-side errors (5xx) to improve pipeline resilience.
  • Comprehensive Testing: Developed extensive unit tests using mocks and integration tests requiring an API key, ensuring the reliability and correctness of the Anthropic integration.
  • Build System Integration: Registered a new pytest marker and defined a Gradle task for the integration tests (not yet wired into post-commit checks, since the repo lacks an Anthropic API key secret).
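The retry behavior summarized above (treat 429 and 5xx as transient, back off, then re-raise) can be sketched in isolation. This is an illustrative stand-alone version with a hypothetical ApiError type, not the PR's actual implementation:

```python
import time

class ApiError(Exception):
    """Hypothetical error type carrying an HTTP status code."""
    def __init__(self, status_code):
        super().__init__(f"HTTP {status_code}")
        self.status_code = status_code

RETRYABLE_STATUS = {429} | set(range(500, 600))  # rate limits + server errors

def call_with_retries(fn, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Call fn(), retrying retryable statuses with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except ApiError as e:
            if e.status_code not in RETRYABLE_STATUS or attempt == max_attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

Non-retryable errors (e.g. 401) surface immediately, so misconfigured credentials fail fast rather than burning through the retry budget.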



Review comment on the quoted Gradle build snippet:

    }

    // Anthropic RunInference IT tests
    task anthropicInferenceTest {
Contributor Author


Not too sure about this; I've based it on geminiInferenceTest, but if ANTHROPIC_API_KEY is not defined at runtime, it will be a no-op.

Contributor


You should define the task but not add it to the inferencePostCommitIT task set yet, since the repo doesn't have an Anthropic API key secret. The OpenAI handler is in the same boat. At some point I need to see if Apache has resources available for these.

@bvolpato bvolpato force-pushed the bvolpato/anthropic-inference branch from 5dc31d8 to e1b72f9 Compare March 26, 2026 03:30
@bvolpato bvolpato marked this pull request as draft March 26, 2026 03:58
@bvolpato bvolpato marked this pull request as ready for review March 26, 2026 13:26
@github-actions
Contributor

Assigning reviewers:

R: @jrmccluskey for label python.

Note: If you would like to opt out of this review, comment assign to next reviewer.

Available commands:

  • stop reviewer notifications - opt out of the automated review tooling
  • remind me after tests pass - tag the comment author after tests pass
  • waiting on author - shift the attention set back to the author (any comment or push by the author will return the attention set to the reviewers)

The PR bot will only process comments in the main thread (not review comments).

Adds a new RemoteModelHandler for Anthropic's Claude models via the
Messages API. Features include:

- AnthropicModelHandler with retry logic for 429/5xx errors
- message_from_string and message_from_conversation request functions
- System prompt support (constructor-level and per-request overrides)
- Structured JSON output via output_config (GA API, requires anthropic>=0.86.0)
- Comprehensive unit tests and integration tests (Fizz counting rule)
- Integration tests gated behind ANTHROPIC_API_KEY env var
@bvolpato bvolpato force-pushed the bvolpato/anthropic-inference branch from e9844cc to 412d534 Compare March 26, 2026 15:59