Add Integration test to increase the coverage on Read by format #1439
Conversation
…to cover the test cases of window partition by.
Summary of Changes

Hello @yalimu-g, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request introduces a new integration test to enhance the coverage of the "Read by format" feature in the Spark BigQuery connector. The added test specifically validates the correct functioning of Spark's window functions (partitionBy with row_number) when reading data through the connector.
Code Review
This pull request adds an integration test for window functions when reading from BigQuery. The test logic is sound, but I've found a few issues: there's a redundant import; the window function uses a non-deterministic ordering, which makes the test weak; and, more critically, the assertions on the resulting DataFrame's column count and indexing are incorrect and would cause the test to fail. I've provided suggestions to address these points.
assertThat(dataset.columns().length).isEqualTo(17);
assertThat(
        Arrays.stream(dataset.schema().fields())
            .filter(field -> field.name().equals("row_num"))
            .count())
    .isEqualTo(1);
assertThat(dataset.head().get(16)).isEqualTo(1);
The assertions about the DataFrame's column count and indexing are incorrect. The public table bigquery-public-data.google_analytics_sample.ga_sessions_20170801 contains 15 columns. After adding the row_num column, the DataFrame will have 16 columns in total, not 17. Consequently, the assertion assertThat(dataset.head().get(16)) will fail with an ArrayIndexOutOfBoundsException since valid indices are 0-15. The new row_num column will be at index 15.
Suggested change:

assertThat(dataset.columns().length).isEqualTo(16);
assertThat(
        Arrays.stream(dataset.schema().fields())
            .filter(field -> field.name().equals("row_num"))
            .count())
    .isEqualTo(1);
assertThat(dataset.head().get(15)).isEqualTo(1);
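As an aside (not part of the suggestion above), the index could also be derived from the schema rather than hard-coded, which keeps the assertion valid even if the public table gains or loses columns. A minimal sketch, assuming the same dataset variable as in the diff:

```java
// Sketch only: derive the position of the generated column from the schema
// instead of hard-coding 15/16, so the assertion survives upstream schema changes.
int rowNumIndex = dataset.schema().fieldIndex("row_num");
assertThat(rowNumIndex).isEqualTo(dataset.columns().length - 1);
assertThat(dataset.head().get(rowNumIndex)).isEqualTo(1);
```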
import org.apache.spark.sql.Row;
import org.apache.spark.sql.expressions.Window;
import org.apache.spark.sql.expressions.WindowSpec;
import org.apache.spark.sql.types.*;
This wildcard import org.apache.spark.sql.types.* appears to be redundant. The file already specifically imports the required classes from this package (such as DataType, DataTypes, StructField, and StructType), and the new test case does not introduce any new dependencies from this package. To improve clarity and avoid unnecessary imports, this line should be removed.
Window.partitionBy(
        concat(
            col("fullVisitorId"), col("visitStartTime"), col("trafficSource.referralPath")))
    .orderBy(lit("window_ordering"));
Using lit("window_ordering") for ordering within the window function makes the ordering non-deterministic because all rows within a partition will have the same value for the ordering column. This means row_number() will assign numbers arbitrarily to rows within each partition. While the test might currently pass, it is not robust and could be flaky. It also doesn't properly test the ordering capability of the window function. Please consider ordering by a meaningful column to ensure deterministic behavior, for example visitStartTime.
Suggested change:

.orderBy(col("visitStartTime"));
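For context, a minimal sketch of the window spec with the deterministic ordering applied and the row_num column added, assuming the DataFrame read from the table is named dataset and the usual static imports from org.apache.spark.sql.functions (col, concat, row_number) are in place:

```java
// Sketch only: same partition key as the PR, but ordered by visitStartTime so that
// row_number() assigns the same numbers within each partition on every run.
WindowSpec windowSpec =
    Window.partitionBy(
            concat(
                col("fullVisitorId"), col("visitStartTime"), col("trafficSource.referralPath")))
        .orderBy(col("visitStartTime"));
Dataset<Row> withRowNum = dataset.withColumn("row_num", row_number().over(windowSpec));
```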
/gcbrun |
Closing this in favor of the new PR #1447, which can reproduce the customer issue with the dataset they used.
Add the function testWindowFunctionPartitionBy to ReadByFormatIntegrationTestBase to test the Window partitionBy function.
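The full test body is not quoted in this conversation. Pieced together from the diff fragments and review suggestions above, a minimal sketch of what the test might look like; the class name, the spark session field, and the fixed "bigquery" format are illustrative assumptions, not the connector's actual test scaffolding, which presumably parameterizes the read format under test:

```java
import static com.google.common.truth.Truth.assertThat;
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.concat;
import static org.apache.spark.sql.functions.row_number;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.expressions.Window;
import org.apache.spark.sql.expressions.WindowSpec;
import org.junit.Test;

// Illustrative stand-in; the real test lives in ReadByFormatIntegrationTestBase.
public class WindowFunctionPartitionBySketch {

  private final SparkSession spark =
      SparkSession.builder().master("local[*]").appName("window-partition-by-sketch").getOrCreate();

  @Test
  public void testWindowFunctionPartitionBy() {
    // Read the public Google Analytics sample table through the connector.
    Dataset<Row> dataset =
        spark
            .read()
            .format("bigquery")
            .load("bigquery-public-data.google_analytics_sample.ga_sessions_20170801");

    // Partition on the concatenated key and order by visitStartTime so that
    // row_number() is assigned deterministically within each partition.
    WindowSpec windowSpec =
        Window.partitionBy(
                concat(
                    col("fullVisitorId"),
                    col("visitStartTime"),
                    col("trafficSource.referralPath")))
            .orderBy(col("visitStartTime"));

    dataset = dataset.withColumn("row_num", row_number().over(windowSpec));

    // The public table has 15 columns; row_num makes 16, so the new column sits at index 15.
    assertThat(dataset.columns().length).isEqualTo(16);
    assertThat(dataset.head().get(15)).isEqualTo(1);
  }
}
```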