
Commit d3fa094

Authored May 6, 2025
feat: add some torch.distributed examples (#315)
* feat: add test_dist_all script
* feat: init mp worker & ray worker
* feat: add test_dist_all script
* feat: add test all_to_all_single uneven
* feat: add test all_to_all_single uneven
* feat: add test all_to_all_single uneven
* feat: add ray all_to_all_single uneven example
* feat: add ray all_to_all_single uneven example
* feat: add ray all_to_all_single uneven example
* feat: add ray all_to_all_single uneven example
* feat: add ray all_to_all_single uneven example
* feat: add ray all_to_all_single uneven example
1 parent 0c41424 commit d3fa094
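The commit message centers on `all_to_all_single` with uneven splits. As a point of reference, here is a minimal sketch of that primitive with plain `torch.distributed` — not the repository's test script; the file name, split-size scheme, and tensor values are illustrative. Launch with e.g. `torchrun --nproc_per_node=2 sketch.py`:

```python
import torch
import torch.distributed as dist


def main() -> None:
    # torchrun sets MASTER_ADDR/PORT, RANK, and WORLD_SIZE for us.
    dist.init_process_group(backend="gloo")  # use "nccl" for GPU tensors
    rank = dist.get_rank()
    world_size = dist.get_world_size()

    # Uneven splits: rank r sends (r + 1) elements to every peer, so
    # every rank receives (p + 1) elements from each peer p.
    input_splits = [rank + 1] * world_size
    output_splits = [p + 1 for p in range(world_size)]

    x = torch.full((sum(input_splits),), float(rank))
    out = torch.empty(sum(output_splits))
    dist.all_to_all_single(
        out, x, output_split_sizes=output_splits, input_split_sizes=input_splits
    )
    print(f"rank {rank}: received {out.tolist()}")
    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

With two ranks, every rank receives `[0.0, 1.0, 1.0]`: one element from rank 0 and two from rank 1, matching the uneven `output_split_sizes`.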

File tree

2 files changed: +3 −0


CONTRIBUTE.md

Lines changed: 2 additions & 0 deletions
@@ -11,6 +11,8 @@ Before submitting code, configure pre-commit, for example:
 # fork xlite-dev/LeetCUDA to your own github page, then:
 git clone git@github.com:your-github-page/your-fork-LeetCUDA.git
 cd your-fork-LeetCUDA && git checkout -b test
+# update submodule
+git submodule update --init --recursive --force
 # install pre-commit
 pip3 install pre-commit
 pre-commit install

others/pytorch/distributed/test_all_to_all_single_ray.py

Lines changed: 1 addition & 0 deletions
@@ -119,6 +119,7 @@ def run(self) -> torch.Tensor:
 
 
 if __name__ == "__main__":
+    # export RAY_DEDUP_LOGS=0
     world_size = torch.cuda.device_count()
     print(f"world_size: {world_size}")
     if not ray.is_initialized():
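The diff's `# export RAY_DEDUP_LOGS=0` hint refers to Ray's log deduplication: by default Ray collapses repeated log lines coming from different workers, and setting `RAY_DEDUP_LOGS=0` shows each worker's output in full. For context, here is a hedged sketch of the actor-per-GPU pattern this file appears to use — the class name `DistWorker`, the rendezvous port, and the split sizes are assumptions, not the repository's exact code:

```python
import os

import ray
import torch
import torch.distributed as dist


@ray.remote(num_gpus=1)
class DistWorker:  # hypothetical name; the repository's actor may differ
    def __init__(self, rank: int, world_size: int):
        # Rendezvous over a local TCP store; port 29500 is an assumption.
        os.environ["MASTER_ADDR"] = "127.0.0.1"
        os.environ["MASTER_PORT"] = "29500"
        dist.init_process_group("nccl", rank=rank, world_size=world_size)
        self.rank, self.world_size = rank, world_size

    def run(self) -> torch.Tensor:
        # Uneven splits: rank r sends (r + 1) elements to every peer.
        in_splits = [self.rank + 1] * self.world_size
        out_splits = [r + 1 for r in range(self.world_size)]
        x = torch.full((sum(in_splits),), float(self.rank), device="cuda")
        out = torch.empty(sum(out_splits), device="cuda")
        dist.all_to_all_single(
            out, x, output_split_sizes=out_splits, input_split_sizes=in_splits
        )
        return out.cpu()


if __name__ == "__main__":
    # export RAY_DEDUP_LOGS=0  # show every worker's logs unmerged
    world_size = torch.cuda.device_count()
    print(f"world_size: {world_size}")
    if not ray.is_initialized():
        ray.init()
    workers = [DistWorker.remote(r, world_size) for r in range(world_size)]
    print(ray.get([w.run.remote() for w in workers]))
    ray.shutdown()
```

Since `@ray.remote(num_gpus=1)` pins one GPU per actor through `CUDA_VISIBLE_DEVICES`, each actor can address its device simply as `"cuda"`.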
