Add a test for ORC write with more than one stripe #11743
base: branch-25.02
@@ -91,6 +91,20 @@ def test_write_round_trip(spark_tmp_path, orc_gens, orc_impl):
         data_path,
         conf={'spark.sql.orc.impl': orc_impl, 'spark.rapids.sql.format.orc.write.enabled': True})
 
+@pytest.mark.parametrize('orc_gens', orc_write_gens_list, ids=idfn)
+@pytest.mark.parametrize('orc_impl', ["native", "hive"])
+@allow_non_gpu(*non_utc_allow)
+def test_write_more_than_one_stripe_round_trip(spark_tmp_path, orc_gens, orc_impl):
+    gen_list = [('_c' + str(i), gen) for i, gen in enumerate(orc_gens)]
+    data_path = spark_tmp_path + '/ORC_DATA'
+    assert_gpu_and_cpu_writes_are_equal_collect(
+        # Generate a dataframe large enough to produce more than one stripe (typically 64 MB)
+        # Preferably use only one partition to avoid splitting the data
+        lambda spark, path: gen_df(spark, gen_list, 12800, num_slices=1).write.orc(path),
+        lambda spark, path: spark.read.orc(path),
+        data_path,
+        conf={'spark.sql.orc.impl': orc_impl, 'spark.rapids.sql.format.orc.write.enabled': True})
+
 @pytest.mark.parametrize('orc_gen', orc_write_odd_empty_strings_gens_sample, ids=idfn)
 @pytest.mark.parametrize('orc_impl', ["native", "hive"])
 def test_write_round_trip_corner(spark_tmp_path, orc_gen, orc_impl):

Review thread on the gen_df(spark, gen_list, 12800, num_slices=1) line:

Comment: Question: Where does the 12800 number come from? Do we know it will be greater than 64 MB (the ORC stripe size) for all the data generators you tested?

Reply: This number comes from my experiment.

Reply: In general cuDF will split the data both by rows and by size. For Parquet the row split is at 20,000 rows, but for ORC it is 1,000,000. I am not sure how 12,800 boolean values produce more than one stripe. I would really like to understand this better, because I would expect that to be nowhere close to the row-group count we would expect to cause multiple slices.
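One way to answer the "is it really more than one stripe" question empirically, not part of the PR: read the stripe count back from the files the CPU run wrote. This is a minimal sketch under stated assumptions: that pyarrow is available, that assert_gpu_and_cpu_writes_are_equal_collect leaves its CPU output under data_path + '/CPU', and that Spark's output files match the part-*.orc glob; none of these are verified here.

    # Sketch only: count ORC stripes across the part files in a directory.
    import glob
    import pyarrow.orc as orc

    def count_stripes(orc_dir):
        # Spark may write several part files; sum the stripes across all of them.
        # The 'part-*.orc' pattern is an assumption about Spark's output file names.
        return sum(orc.ORCFile(f).nstripes for f in glob.glob(orc_dir + '/part-*.orc'))

    # Assumed usage after the write has run:
    #   assert count_stripes(data_path + '/CPU') > 1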
Comment: If the difference is just that it generates more data than test_write_round_trip, why not just make that case generate more? And another question: do we want to generate more than one stripe for the other cases in this file, or just this case?
Reply: Well, if the new case can fail on its own, I think it makes sense to add it and keep the previous one, so we can test the two behaviours at the same time. Another option might be to add the length to the @pytest.mark.parametrize list too, but I am not sure whether that would bring some tricky if/else over orc_gens to xfail the failing cases (a sketch of this option follows below).
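To make the parametrized-length option concrete, here is a minimal sketch under stated assumptions: the helpers (gen_df, idfn, orc_write_gens_list, assert_gpu_and_cpu_writes_are_equal_collect, allow_non_gpu, non_utc_allow) come from the surrounding test file, the test name and the 2048 row count are invented for illustration, and the xfail marker is a placeholder. The "tricky if/else" concern would surface if the xfail had to depend on which generators in orc_gens actually fail, rather than applying to a whole row count as shown here.

    # Sketch only: fold the row count into the parametrization instead of adding a second test.
    import pytest

    @pytest.mark.parametrize('orc_gens', orc_write_gens_list, ids=idfn)
    @pytest.mark.parametrize('orc_impl', ["native", "hive"])
    @pytest.mark.parametrize('length', [
        2048,  # small: expected to stay within a single stripe (illustrative value)
        pytest.param(12800, marks=pytest.mark.xfail(
            reason='placeholder: mark only the combinations known to fail')),
    ])
    @allow_non_gpu(*non_utc_allow)
    def test_write_round_trip_lengths(spark_tmp_path, orc_gens, orc_impl, length):
        gen_list = [('_c' + str(i), gen) for i, gen in enumerate(orc_gens)]
        data_path = spark_tmp_path + '/ORC_DATA'
        assert_gpu_and_cpu_writes_are_equal_collect(
            # num_slices=1 keeps the data in one partition, as in the PR's new test
            lambda spark, path: gen_df(spark, gen_list, length, num_slices=1).write.orc(path),
            lambda spark, path: spark.read.orc(path),
            data_path,
            conf={'spark.sql.orc.impl': orc_impl,
                  'spark.rapids.sql.format.orc.write.enabled': True})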