
Conversation

@yalimu-g
Contributor

Add a function, testWindowFunctionPartitionBy, to ReadByFormatIntegrationTestBase to test Spark's Window.partitionBy function.

@yalimu-g yalimu-g self-assigned this Nov 19, 2025
@gemini-code-assist

Summary of Changes

Hello @yalimu-g, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a new integration test to enhance the coverage of the "Read by format" feature in the Spark BigQuery connector. The added test specifically validates the correct functioning of Spark's Window.partitionBy operation when applied to data read from BigQuery, ensuring robust data processing capabilities.

Highlights

  • New Integration Test: A new integration test, testWindowFunctionPartitionBy, has been added to ReadByFormatIntegrationTestBase to specifically test the Window.partitionBy function.
  • Increased Test Coverage: This new test increases the overall test coverage for the "Read by format" functionality within the Spark BigQuery connector, particularly for window functions.
  • Public Dataset Usage: The test utilizes the bigquery-public-data.google_analytics_sample.ga_sessions_20170801 dataset to validate the window function behavior when reading data from BigQuery.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds an integration test for window functions when reading from BigQuery. The test logic is sound, but I've found a few issues: there's a redundant import; the window function uses a non-deterministic ordering, which makes the test weak; and, more critically, the assertions on the resulting DataFrame's column count and indexing are incorrect and would cause the test to fail. I've provided suggestions to address these points.

Comment on lines +339 to +345
assertThat(dataset.columns().length).isEqualTo(17);
assertThat(
        Arrays.stream(dataset.schema().fields())
            .filter(field -> field.name().equals("row_num"))
            .count())
    .isEqualTo(1);
assertThat(dataset.head().get(16)).isEqualTo(1);


critical

The assertions about the DataFrame's column count and indexing are incorrect. The public table bigquery-public-data.google_analytics_sample.ga_sessions_20170801 contains 15 columns. After adding the row_num column, the DataFrame will have 16 columns in total, not 17. Consequently, the assertion assertThat(dataset.head().get(16)) will fail with an ArrayIndexOutOfBoundsException since valid indices are 0-15. The new row_num column will be at index 15.

Suggested change
-assertThat(dataset.columns().length).isEqualTo(17);
-assertThat(
-        Arrays.stream(dataset.schema().fields())
-            .filter(field -> field.name().equals("row_num"))
-            .count())
-    .isEqualTo(1);
-assertThat(dataset.head().get(16)).isEqualTo(1);
+assertThat(dataset.columns().length).isEqualTo(16);
+assertThat(
+        Arrays.stream(dataset.schema().fields())
+            .filter(field -> field.name().equals("row_num"))
+            .count())
+    .isEqualTo(1);
+assertThat(dataset.head().get(15)).isEqualTo(1);
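Beyond fixing the count, the position of row_num could be looked up by name rather than hardcoded, so the assertion survives schema changes in the public table (in Spark itself, dataset.schema().fieldIndex("row_num") performs this lookup). A minimal plain-Java sketch of the idea, using a hypothetical stand-in schema rather than the real table's columns:

```java
public class ColumnIndexLookup {

    // Finds a column's position by name instead of hardcoding an index,
    // so assertions keep working if the source schema gains or loses columns.
    static int indexOf(String[] columns, String name) {
        for (int i = 0; i < columns.length; i++) {
            if (columns[i].equals(name)) {
                return i;
            }
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        // Hypothetical stand-in for the table schema plus the appended window column.
        String[] columns = {"fullVisitorId", "visitStartTime", "trafficSource", "row_num"};
        System.out.println(indexOf(columns, "row_num")); // prints 3 (the last column)
    }
}
```

With this, the final assertion becomes dataset.head().get(fieldIndex) and no longer depends on the table keeping exactly 15 columns.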

import org.apache.spark.sql.Row;
import org.apache.spark.sql.expressions.Window;
import org.apache.spark.sql.expressions.WindowSpec;
import org.apache.spark.sql.types.*;


medium

This wildcard import org.apache.spark.sql.types.* appears to be redundant. The file already specifically imports the required classes from this package (such as DataType, DataTypes, StructField, and StructType), and the new test case does not introduce any new dependencies from this package. To improve clarity and avoid unnecessary imports, this line should be removed.

Window.partitionBy(
        concat(
            col("fullVisitorId"), col("visitStartTime"), col("trafficSource.referralPath")))
    .orderBy(lit("window_ordering"));


medium

Using lit("window_ordering") for ordering within the window function makes the ordering non-deterministic because all rows within a partition will have the same value for the ordering column. This means row_number() will assign numbers arbitrarily to rows within each partition. While the test might currently pass, it is not robust and could be flaky. It also doesn't properly test the ordering capability of the window function. Please consider ordering by a meaningful column to ensure deterministic behavior, for example visitStartTime.

Suggested change
-    .orderBy(lit("window_ordering"));
+    .orderBy(col("visitStartTime"));

@yalimu-g
Contributor Author

/gcbrun

@yalimu-g
Contributor Author

yalimu-g commented Dec 2, 2025

Closing this in favor of the new PR #1447, which can reproduce the customer issue with the dataset they used.

@yalimu-g yalimu-g closed this Dec 2, 2025