[FEA] explore alternative hashing for non-shuffle partitioning #11900
Labels
- ? - Needs Triage: Need team to review and classify
- feature request: New feature or request
- performance: A performance related task/issue
- reliability: Features to improve reliability or bugs that severely impact the reliability of the plugin
Is your feature request related to a problem? Please describe.
I recently spent some time debugging an issue with some hash aggregate tests that use a very small target batch size. It exposed an interesting realization: the way Spark handles nulls when hashing can cause an excessive number of hash collisions, because a null input is skipped and leaves the running hash unchanged, so rows that differ only in which columns are null can hash to the same value. I personally consider this a bug in Spark's hashing code, but we cannot fix it on our side, because producing different hash results would make us incompatible with Spark.
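For example, all of the following produce the same hash value in Spark, because a null argument leaves the running hash untouched (illustrative, runnable in spark-shell; the specific value depends on the inputs, but the collision does not):

```scala
// Spark's hash() starts from a seed and folds each argument in; a NULL
// argument is skipped, leaving the hash unchanged. So rows that differ
// only in where their nulls fall collide:
spark.sql("SELECT hash(NULL, 5) AS a, hash(5, NULL) AS b, hash(5) AS c").show()
// a, b, and c all show the same value.
```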
But for internal code, where we repartition data for joins or hash aggregates, the results never need to match Spark's, so we should look at using an alternative algorithm that does not have these limitations. I am not sure whether the CUDF hash implementation is better, but we probably want to try it out and evaluate it, both in terms of the performance of computing the hash and in terms of how it deals with nulls. One possible shape for such an alternative is sketched below.
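As a minimal sketch of what avoiding the limitation could look like, the hash below mixes a sentinel into the running hash for nulls instead of skipping them. This is not the plugin's or CUDF's implementation: `internalHash`, the sentinel constant, and the use of `scala.util.hashing.MurmurHash3` are all stand-ins for illustration.

```scala
import scala.util.hashing.MurmurHash3

// Hypothetical internal-only hash: nulls contribute a sentinel instead of
// being skipped, so differing null positions produce different hashes.
def internalHash(values: Seq[Option[Int]], seed: Int = 42): Int =
  values.foldLeft(seed) {
    case (h, Some(v)) => MurmurHash3.mix(h, v)
    case (h, None)    => MurmurHash3.mix(h, 0xdeadbeef) // arbitrary null sentinel
  }

// Unlike Spark's hash, these no longer collide:
// internalHash(Seq(None, Some(5))) != internalHash(Seq(Some(5), None))
```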
We had to do a lot of work to make our hash code compatible with Spark, so I suspect that this compatibility may also be slowing down some of the computation.