
Rocm jaxlib v0.5.0 warpsize #169


Open: wants to merge 2 commits into rocm-jaxlib-v0.5.0 from rocm-jaxlib-v0.5.0-warpsize

Conversation

zoranjovanovic-ns: No description provided.

zoranjovanovic-ns force-pushed the rocm-jaxlib-v0.5.0-warpsize branch from 971541b to 7d58776 on April 12, 2025 at 07:54.
@pemeliya left a comment:

I wonder if the test MultiOutputFusionTest.MultiOutputReduceFusionMajorWithExtraOutput would still fail with this warp size config?

```diff
-      analysis, /*minor_dim=*/input_shape_.back(), WarpSize());
+      analysis, /*minor_dim=*/input_shape_.back(), kTileSize);
   int64_t num_warps_per_column = WarpSize();
   num_threads_ = {num_warps_per_column, WarpSize()};
```


So we need to change the tile size to 32 instead of WarpSize(device_info) here? May I ask why?

@zoranjovanovic-ns (Author):

This is temporary; I believe the reduction algorithm needs modifications in order to work with warp_size == 64.
Without this change, some tests fail.

@pemeliya (Apr 17, 2025):

Yes, I also did not find a good solution here. This only applies to column-wise reductions.
They work as follows: one block of 1024 threads (32x32) performs a column reduction for one vertical stripe of N rows and 32 columns. Basically, each warp loads and reduces N/32 rows (each having 32 elements) and writes its resulting reduced row to shared memory. As a result, we have 32 rows of 32 elements written to shared memory.

After that, we do a __syncthreads and each warp reads one vertical column from shared memory and performs a warp-level reduction on it. Finally, each warp writes its single reduced element back to global memory. As a result, the Nx32 stripe is reduced to a 1x32 row.
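For illustration, here is a minimal self-contained sketch of that two-phase scheme as a hand-written kernel. The kernel name, the float/add reduction, and the one-stripe-per-block launch are assumptions for the example; the real emitter generates this code from indexing maps rather than writing it by hand:

```cpp
#include <cuda_runtime.h>

constexpr int kWarp32 = 32;  // warp size assumed by the current emitter

// Reduces one N x 32 vertical stripe of `in` (row-major, stripe width 32)
// to a single 1 x 32 row in `out`. Launch as one block of 32x32 threads.
__global__ void ColumnReduceStripe32(const float* in, float* out, int n_rows) {
  __shared__ float tile[kWarp32][kWarp32 + 1];  // +1 pads away bank conflicts

  const int lane = threadIdx.x;  // column within the stripe
  const int warp = threadIdx.y;  // warp index within the block

  // Phase 1: warp `warp` loads and reduces rows warp, warp+32, warp+64, ...
  // and writes its single reduced row of 32 partial sums to shared memory.
  float acc = 0.0f;
  for (int row = warp; row < n_rows; row += kWarp32) {
    acc += in[row * kWarp32 + lane];
  }
  tile[warp][lane] = acc;
  __syncthreads();

  // Phase 2: warp `warp` reads vertical column `warp` of the 32x32 tile and
  // reduces it with warp shuffles; lane 0 writes the one result element.
  float v = tile[lane][warp];
  for (int offset = kWarp32 / 2; offset > 0; offset /= 2) {
    v += __shfl_down_sync(0xffffffffu, v, offset);
  }
  if (lane == 0) out[warp] = v;
}
```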

To make it work for warp_size = 64, we could have 16 warps (16 * 64 = 1024 threads) processing one vertical stripe of N rows and 64 columns. But each warp would then process N/16 rows and perform 4 writes to shared memory (instead of 1). As a result, we would have one large shared-memory array of size 64x64 to be transposed. But I don't have a clear idea how to express this in terms of the indexing maps used in the reduction emitter; see the sketch below for one possible shape of the scheme.
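A hedged sketch of that wave64 variant, HIP-flavored since warp_size == 64 implies an AMD wavefront. The 4-way interleaved row grouping is my guess at one concrete way to get the 4 shared-memory writes per warp and the 64x64 tile described above; it is not the emitter's actual logic:

```cpp
#include <hip/hip_runtime.h>

constexpr int kWave = 64;   // AMD wavefront size
constexpr int kWaves = 16;  // 16 * 64 = 1024 threads per block

// Reduces one N x 64 stripe to a 1 x 64 row. Launch as one 64x16 block.
__global__ void ColumnReduceStripe64(const float* in, float* out, int n_rows) {
  __shared__ float tile[kWave][kWave + 1];  // the 64x64 array to transpose

  const int lane = threadIdx.x;  // 0..63, column within the stripe
  const int wave = threadIdx.y;  // 0..15

  // Phase 1: each wave still covers N/16 rows in total, but splits them into
  // 4 interleaved groups and performs 4 shared-memory writes instead of 1,
  // so the partial-sum tile grows to 64 rows of 64 elements.
  for (int g = 0; g < 4; ++g) {
    float acc = 0.0f;
    for (int row = 4 * wave + g; row < n_rows; row += 4 * kWaves) {
      acc += in[row * kWave + lane];
    }
    tile[4 * wave + g][lane] = acc;
  }
  __syncthreads();

  // Phase 2: 16 waves cover the 64 columns, 4 columns per wave; each column
  // of 64 partials is reduced with wave-level shuffles.
  for (int g = 0; g < 4; ++g) {
    const int col = 4 * wave + g;
    float v = tile[lane][col];
    for (int offset = kWave / 2; offset > 0; offset /= 2) {
      v += __shfl_down(v, offset);
    }
    if (lane == 0) out[col] = v;
  }
}
```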

```diff
@@ -87,7 +88,7 @@ TransposeFusion::TransposeFusion(const HloFusionAnalysis& analysis)
       permutation_(transpose_.permutation),
       input_shape_(
           Permute(transpose_.dimensions, InversePermutation(permutation_))),
-      base_block_size_(WarpSize(analysis_.device_info())) {
+      base_block_size_(kTileSize) {
```

ditto

@i-chaochen: @zoranjovanovic-ns WDYT about PR #170?

@zoranjovanovic-ns (Author):

> @zoranjovanovic-ns WDYT about PR #170?

There are a number of #ifdefs that we cannot upstream, and it has the same issue with reduce as this PR (the reduction algorithm probably needs to be modified), but if it fixes more tests then we can use it as a temporary solution.
