
[Integration] Expose length-aware batching in all ModelHandler subclasses#37945

Open
Eliaaazzz wants to merge 3 commits into apache:master from Eliaaazzz:users/elia/issue-37531-smart-bucketing-integration

Conversation

@Eliaaazzz
Contributor

Summary

Addresses #37531.

This PR completes the smart bucketing integration for Python RunInference by exposing batch_length_fn and batch_bucket_boundaries on all concrete ModelHandler implementations.

The underlying batching support already exists in the base layer. The missing piece was that many user-facing handlers did not surface these options, which made length-aware batching effectively unavailable for a large part of the inference API surface. With this change, users can enable smart bucketing directly from the handler constructor across supported backends.
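To make the behavior concrete, here is a toy sketch of what bucket boundaries do, invented for illustration only (the `bucket_for` helper and the boundary values are not Beam code): each element's length is computed by the length function, and the sorted boundaries partition elements into buckets so that similarly sized inputs are batched together.

```python
import bisect

def bucket_for(element, length_fn, boundaries):
    """Return the index of the bucket an element falls into.

    boundaries must be sorted ascending; an element lands in the
    first bucket whose boundary is >= its length.
    """
    return bisect.bisect_left(boundaries, length_fn(element))

boundaries = [4, 16]  # buckets: len <= 4, len <= 16, len > 16
elements = ["a", "bb", "cccccc", "ddddddd"]
buckets = {}
for e in elements:
    buckets.setdefault(bucket_for(e, len, boundaries), []).append(e)
# short strings share one bucket, longer strings another
```

Under this grouping, "a" and "bb" share a bucket while "cccccc" and "ddddddd" share a different one, which is the same separation the PR's end-to-end test asserts.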

What Changed

This change adds batch_length_fn and batch_bucket_boundaries to 16 concrete handlers across the following backends:

  • PyTorch
  • HuggingFace
  • scikit-learn
  • TensorFlow
  • ONNX
  • XGBoost
  • TensorRT
  • vLLM
  • Vertex AI
  • Gemini

Implementation details:

  • Handlers that inherit from ModelHandler now pass the new parameters through to super().__init__()
  • Remote handlers that manage batching kwargs directly (GeminiModelHandler and VertexAIModelHandlerJSON) now wire the values into _batching_kwargs
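A minimal sketch of the two wiring patterns described above, using invented stand-in classes rather than Beam's real handlers (only the parameter names `batch_length_fn`, `batch_bucket_boundaries`, the `_batching_kwargs` attribute, and `batch_elements_kwargs()` come from the PR; everything else is illustrative):

```python
class BaseHandler:
    """Plays the role of the ModelHandler base: stores bucketing kwargs."""
    def __init__(self, batch_length_fn=None, batch_bucket_boundaries=None):
        self._batching_kwargs = {}
        if batch_length_fn is not None:
            self._batching_kwargs['batch_length_fn'] = batch_length_fn
        if batch_bucket_boundaries is not None:
            self._batching_kwargs['batch_bucket_boundaries'] = (
                batch_bucket_boundaries)

    def batch_elements_kwargs(self):
        return self._batching_kwargs


class LocalLikeHandler(BaseHandler):
    """Pattern 1: forward the new parameters to super().__init__()."""
    def __init__(self, model_path, batch_length_fn=None,
                 batch_bucket_boundaries=None):
        self._model_path = model_path
        super().__init__(
            batch_length_fn=batch_length_fn,
            batch_bucket_boundaries=batch_bucket_boundaries)


class RemoteLikeHandler:
    """Pattern 2: manage _batching_kwargs directly (Gemini/Vertex style)."""
    def __init__(self, endpoint, batch_length_fn=None,
                 batch_bucket_boundaries=None):
        self._endpoint = endpoint
        self._batching_kwargs = {}
        if batch_length_fn is not None:
            self._batching_kwargs['batch_length_fn'] = batch_length_fn
        if batch_bucket_boundaries is not None:
            self._batching_kwargs['batch_bucket_boundaries'] = (
                batch_bucket_boundaries)

    def batch_elements_kwargs(self):
        return self._batching_kwargs
```

Either way, the values surface through `batch_elements_kwargs()`, which is what the forwarding tests below inspect.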

Testing

Added test coverage in base_test.py for both behavior and wiring:

  • an end-to-end RunInferenceLengthAwareBatchingTest that verifies short and long string inputs are bucketed into separate batches under FnApiRunner
  • a HandlerBucketingKwargsForwardingTest that checks each concrete handler forwards batch_length_fn and batch_bucket_boundaries into batch_elements_kwargs()
  • follow-up fixes to keep the forwarding tests hermetic, especially for HuggingFace pipeline validation and Vertex AI endpoint liveness checks
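The hermeticity fixes can be sketched as follows, with an invented `RemoteHandler` and `_check_endpoint_alive` standing in for the real handlers and their validation (this is not Beam code): any check that would touch the network is patched out so the test exercises only kwarg forwarding.

```python
from unittest import mock


class RemoteHandler:
    """Invented stand-in whose constructor performs a liveness check."""
    def __init__(self, endpoint, batch_length_fn=None,
                 batch_bucket_boundaries=None):
        self._check_endpoint_alive(endpoint)  # a network call in real code
        self._batching_kwargs = {}
        if batch_length_fn is not None:
            self._batching_kwargs['batch_length_fn'] = batch_length_fn
        if batch_bucket_boundaries is not None:
            self._batching_kwargs['batch_bucket_boundaries'] = (
                batch_bucket_boundaries)

    def _check_endpoint_alive(self, endpoint):
        raise RuntimeError('would require a live endpoint')

    def batch_elements_kwargs(self):
        return self._batching_kwargs


# Patch the liveness check so the test never leaves the process.
with mock.patch.object(RemoteHandler, '_check_endpoint_alive'):
    handler = RemoteHandler(
        'projects/p/endpoints/e',
        batch_length_fn=len,
        batch_bucket_boundaries=[32, 128])
    assert handler.batch_elements_kwargs()['batch_bucket_boundaries'] == [32, 128]
```

The same patching approach applies to constructor-time model or pipeline validation in other handlers.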

Context

This is the final integration piece for smart bucketing.

Together, these changes make length-aware batching usable through the public Python inference handlers rather than only at the base implementation layer.


Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

  • Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.

See the Contributor Guide for more tips on how to make the review process smoother.

To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md

@Eliaaazzz
Contributor Author

./gemini review

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the RunInference API by making length-aware batching widely accessible. It integrates smart bucketing capabilities into numerous ModelHandler implementations, allowing users to leverage this feature directly from handler constructors across diverse machine learning backends. This change completes a multi-part effort to enable more efficient inference processing by grouping inputs of similar lengths.

Highlights

  • Expanded Length-Aware Batching: Exposed batch_length_fn and batch_bucket_boundaries in 16 concrete ModelHandler implementations across various ML backends (PyTorch, HuggingFace, scikit-learn, TensorFlow, ONNX, XGBoost, TensorRT, vLLM, Vertex AI, Gemini).
  • Enhanced RunInference API: Enabled length-aware batching for Python RunInference across a wider range of inference APIs, making smart bucketing broadly available to users.
  • Implementation Details: Implemented parameter forwarding to super().__init__() for most handlers and direct wiring into _batching_kwargs for remote handlers like Gemini and Vertex AI.
  • Comprehensive Testing: Added new test coverage in base_test.py including an end-to-end length-aware batching test and dedicated tests for verifying bucketing kwargs forwarding across handlers.


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • /gemini review (Code Review): Performs a code review for the current pull request in its current state.
  • /gemini summary (Pull Request Summary): Provides a summary of the current pull request in its current state.
  • @gemini-code-assist (Comment): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • /gemini help (Help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

@Eliaaazzz changed the title from "[Integration] Expose length-aware batching in all ModelHandler subcla…" to "[Integration] Expose length-aware batching in all ModelHandler subclasses" on Mar 25, 2026
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces length-aware batching capabilities to various ML inference model handlers across different frameworks (Gemini, HuggingFace, ONNX, PyTorch, Scikit-learn, TensorFlow, TensorRT, vLLM, XGBoost, and Vertex AI). This is achieved by adding batch_length_fn and batch_bucket_boundaries parameters to the constructors of these handlers. New tests have been added to base_test.py to validate this functionality and ensure the new batching parameters are correctly forwarded. A review comment suggests refactoring the newly added HandlerBucketingKwargsForwardingTest class to reduce code repetition and improve maintainability by parameterizing the test methods.

assert_that(results, equal_to(['a:2', 'bb:2', 'cccccc:7', 'ddddddd:7']))


class HandlerBucketingKwargsForwardingTest(unittest.TestCase):
Contributor


Severity: medium

The test methods in this class are very repetitive. To improve maintainability and reduce code duplication, consider parameterizing these tests. You could use a library like parameterized or unittest.TestCase.subTest with a loop over a list of handler configurations. Each configuration could specify the handler class, its specific __init__ arguments, and any necessary mocks or setup.

Contributor Author


Good point. These cases all exercise the same forwarding assertion with different handler constructors, so I can consolidate them into a single data-driven test using subTest.
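One way such a consolidation could look, sketched with invented `FakeTorchHandler`/`FakeSklearnHandler` stand-ins in place of the real concrete handlers (in the real test, each factory would construct an actual handler plus any required mocks):

```python
import unittest


class _FakeBase:
    """Stand-in handler: records bucketing kwargs like the real base."""
    def __init__(self, batch_length_fn=None, batch_bucket_boundaries=None):
        self._batching_kwargs = {}
        if batch_length_fn is not None:
            self._batching_kwargs['batch_length_fn'] = batch_length_fn
        if batch_bucket_boundaries is not None:
            self._batching_kwargs['batch_bucket_boundaries'] = (
                batch_bucket_boundaries)

    def batch_elements_kwargs(self):
        return self._batching_kwargs


class FakeTorchHandler(_FakeBase):
    pass


class FakeSklearnHandler(_FakeBase):
    pass


class BucketingForwardingTest(unittest.TestCase):
    def test_all_handlers_forward_bucketing_kwargs(self):
        # One (name, factory) entry per handler; factories hide
        # per-handler constructor arguments and setup.
        cases = [
            ('torch', lambda: FakeTorchHandler(
                batch_length_fn=len, batch_bucket_boundaries=[8])),
            ('sklearn', lambda: FakeSklearnHandler(
                batch_length_fn=len, batch_bucket_boundaries=[8])),
        ]
        for name, factory in cases:
            with self.subTest(handler=name):
                kwargs = factory().batch_elements_kwargs()
                self.assertIs(kwargs['batch_length_fn'], len)
                self.assertEqual(kwargs['batch_bucket_boundaries'], [8])
```

With `subTest`, a failure in one handler's entry is reported individually without aborting the remaining cases.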

@Eliaaazzz force-pushed the users/elia/issue-37531-smart-bucketing-integration branch 2 times, most recently from 5267548 to 0f077f6 on March 25, 2026 at 05:04
@github-actions
Contributor

Assigning reviewers:

R: @shunping for label python.

Note: If you would like to opt out of this review, comment assign to next reviewer.

Available commands:

  • stop reviewer notifications - opt out of the automated review tooling
  • remind me after tests pass - tag the comment author after tests pass
  • waiting on author - shift the attention set back to the author (any comment or push by the author will return the attention set to the reviewers)

The PR bot will only process comments in the main thread (not review comments).

@Eliaaazzz force-pushed the users/elia/issue-37531-smart-bucketing-integration branch 8 times, most recently from d7f7981 to b2b8c70 on March 26, 2026 at 12:06
[Integration] Expose length-aware batching in all ModelHandler subclasses

Completes the smart bucketing feature (apache#37531) by exposing
batch_length_fn and batch_bucket_boundaries parameters across all
concrete ModelHandler implementations.

This allows users to enable length-aware batching on supported
inference backends by passing these parameters directly to the handler
constructor.

- adds batch_length_fn / batch_bucket_boundaries to 16 handler classes
- wires Gemini and Vertex AI batching params into _batching_kwargs
- adds end-to-end RunInference coverage for length-aware batching
- adds per-handler forwarding regression tests and fixes them to be
  hermetic
@Eliaaazzz force-pushed the users/elia/issue-37531-smart-bucketing-integration branch from b2b8c70 to af299ec on March 26, 2026 at 13:57
@Eliaaazzz
Contributor Author

@shunping Hi, could you please take a look? All three failed checks show the same infrastructure error: "The self-hosted runner lost communication with the server." No tests actually failed before the runner disconnected. This appears to be a transient Apache Beam CI issue with the self-hosted runner dropping offline mid-run, unrelated to the changes in this PR.

@shunping
Collaborator

@damccorm, could you help with this?

