Conversation

@loadams loadams commented Feb 13, 2025

  • Test all build cases this needs to support.
  • Test Windows builds/support. - Successfully built deepspeed-0.16.5+1d869d1f-cp311-cp311-win_amd64.whl
  • Confirm pre-compiling ops works with `--no-build-isolation` (see the example commands after this list).
  • Test that commit hashes are added to dev builds (when building wheels with `python -m build`, the hash lands in the sdist but not the wheel: successfully built deepspeed-0.16.5+1d869d1f.tar.gz and deepspeed-0.16.5+unknown-py3-none-any.whl).
  • Add pyproject.toml to the CI path triggers, similar to setup.py.
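
For reference, these build paths can be exercised with commands along these lines (a sketch; exact flags depend on the environment, and `DS_BUILD_OPS` is optional):

    # Editable install (the case the pip 25.1 changes affect)
    pip install -e .

    # Pre-compile ops against the already-installed torch; build isolation
    # is disabled so setup.py can see it at build time
    DS_BUILD_OPS=1 pip install . --no-build-isolation

    # Build the sdist and wheel with a PEP 517 frontend
    python -m build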

The main goal of this effort is to become compliant with the upcoming changes in pip 25.1, which will break legacy (setup.py-based) editable installs. Future PRs will fully move from setup.py to pyproject.toml.
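
For context, the shape of a pyproject.toml that keeps the build logic in setup.py is roughly the following; a sketch based on the PR title and the requires entries quoted in the review comment near the end of this thread, not the verbatim file:

    [build-system]
    requires = ["setuptools>=64", "torch", "wheel"]
    build-backend = "setuptools.build_meta:__legacy__"

The `__legacy__` backend still executes setup.py much like a direct invocation, so the existing logic keeps running while the project advertises a PEP 517 build.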

Fixes: #7031

MII equivalent PR: deepspeedai/DeepSpeed-MII#555
DS-Kernels equivalent PR: deepspeedai/DeepSpeed-Kernels#20

jeffra commented Feb 14, 2025

@mrwyattii we just went through some of this with arctic training. If it’s helpful @loadams let’s discuss on slack a bit. There’s a ton that’s currently happening in setup.py, this could be a big lift? But I agree, needs to happen!

@loadams loadams changed the title Add pyproject.toml Add pyproject.toml with legacy build backend to keep most logic in setup.py Feb 19, 2025
loadams commented Feb 19, 2025

Edit: this is no longer correct with latest changes.

The current problem is that the logic inside setup.py, aside from the call to `setup()` itself, isn't run. This means we don't append cupy to the requirements file, and tests fail as a result. This impacts other parts of the build experience as well, so more work will be needed to switch to a modern build backend.
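
The kind of top-level setup.py logic at issue looks roughly like this; a hypothetical sketch of the cupy handling (names and version mapping are illustrative, not the actual code):

    import torch

    # Top-level setup.py logic: adjust requirements based on the
    # environment the build can see. Under an isolated PEP 517 build,
    # this inspects the temporary build environment, not the user's.
    install_requires = []  # normally read from the requirements files
    if torch.version.cuda is not None:
        # hypothetical: pick the cupy build matching the CUDA major version
        cuda_major = torch.version.cuda.split(".")[0]
        install_requires.append(f"cupy-cuda{cuda_major}x")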

@loadams loadams marked this pull request as ready for review February 25, 2025 18:37
rraminen and others added 22 commits March 25, 2025 08:51
This change is required to successfully build the fp_quantizer extension on ROCm.

---------

Co-authored-by: Logan Adams <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
cc @tjruwase @jomayeri

---------

Co-authored-by: root <root@ftqtmec25000000.taxzvufipdhelhupulxcbvr15f.ux.internal.cloudapp.net>
Signed-off-by: Logan Adams <[email protected]>
Fix #7029 
- Add Chinese blog for deepspeed windows
- Fix format in README.md

Co-authored-by: Logan Adams <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
Adding compile support for AIO library on AMD GPUs.

---------

Co-authored-by: Olatunji Ruwase <[email protected]>
Co-authored-by: Logan Adams <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
Make trace cache warnings configurable, and disable them by default.

Fix #6985, #4081, #5033, #5006, #5662

---------

Signed-off-by: Olatunji Ruwase <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
Update the CUDA compute capabilities for cross compilation according to the wiki page:
https://en.wikipedia.org/wiki/CUDA#GPUs_supported
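
PyTorch's extension builder honors TORCH_CUDA_ARCH_LIST for this; a sketch with an illustrative architecture list, not the one from this commit:

    # Cross compile ops for several compute capabilities, regardless of
    # the GPU present on the build machine
    TORCH_CUDA_ARCH_LIST="7.0;8.0;8.6;9.0" DS_BUILD_OPS=1 \
        pip install . --no-build-isolation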

---------

Signed-off-by: Hongwei <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
…ently, so we aren't seeing cupy installed.

Signed-off-by: Logan Adams <[email protected]>
Propagate API change.

Signed-off-by: Olatunji Ruwase <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
- add zero2 test
- minor fix with transformer version update & ds master merge.

Signed-off-by: inkcherry <[email protected]>
Co-authored-by: Olatunji Ruwase <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
bf16 with MoE: refreshing optimizer state from a bf16 checkpoint raises
IndexError: list index out of range

Signed-off-by: shaomin <[email protected]>
Co-authored-by: shaomin <[email protected]>
Co-authored-by: Hongwei Chen <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
**Auto-generated PR to update version.txt after a DeepSpeed release**
Released version - 0.16.4
Author           - @loadams

Co-authored-by: loadams <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
@jeffra and I fixed this many years ago, so bringing this doc to a
correct state.

---------

Signed-off-by: Stas Bekman <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
Description
This PR adds Tecorigin SDAA accelerator support.
With this PR, DeepSpeed supports SDAA as a backend for training tasks.

---------

Signed-off-by: siqi <[email protected]>
Co-authored-by: siqi <[email protected]>
Co-authored-by: Olatunji Ruwase <[email protected]>
Co-authored-by: Logan Adams <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
A-transformer and others added 12 commits March 25, 2025 08:51
Keeps lines within PEP 8 length limits.
Enhances readability with a single, concise expression.
Preserves original functionality.

---------

Signed-off-by: Shaik Raza Sikander <[email protected]>
Signed-off-by: Olatunji Ruwase <[email protected]>
Signed-off-by: Max Kovalenko <[email protected]>
Signed-off-by: inkcherry <[email protected]>
Signed-off-by: shaomin <[email protected]>
Signed-off-by: Stas Bekman <[email protected]>
Signed-off-by: siqi <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
Signed-off-by: Wei Wu <[email protected]>
Signed-off-by: ShellyNR <[email protected]>
Signed-off-by: Lai, Yejing <[email protected]>
Signed-off-by: Hongwei <[email protected]>
Signed-off-by: Liang Cheng <[email protected]>
Signed-off-by: A-transformer <[email protected]>
Co-authored-by: Raza Sikander <[email protected]>
Co-authored-by: Olatunji Ruwase <[email protected]>
Co-authored-by: Max Kovalenko <[email protected]>
Co-authored-by: Logan Adams <[email protected]>
Co-authored-by: inkcherry <[email protected]>
Co-authored-by: wukong1992 <[email protected]>
Co-authored-by: shaomin <[email protected]>
Co-authored-by: Hongwei Chen <[email protected]>
Co-authored-by: loadams <[email protected]>
Co-authored-by: Stas Bekman <[email protected]>
Co-authored-by: siqi654321 <[email protected]>
Co-authored-by: siqi <[email protected]>
Co-authored-by: Wei Wu <[email protected]>
Co-authored-by: Masahiro Tanaka <[email protected]>
Co-authored-by: Shelly Nahir <[email protected]>
Co-authored-by: snahir <[email protected]>
Co-authored-by: Yejing-Lai <[email protected]>
Co-authored-by: A-transformer <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
Unpin transformers version for all workflows except
`nv-torch-latest-v100` as this still has a tolerance issue with some
quantization tests.

Signed-off-by: Logan Adams <[email protected]>
Resolves #6997 

This PR conditionally quotes environment variable values—only wrapping
those containing special characters (like parentheses) that could
trigger bash errors. Safe values remain unquoted.
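
The idea can be sketched like this; a hypothetical helper, not the PR's actual code:

    import shlex

    # Characters that never need quoting in a bash assignment
    _SAFE_CHARS = set("abcdefghijklmnopqrstuvwxyz"
                      "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                      "0123456789_-./=:")

    def maybe_quote(value: str) -> str:
        # Leave safe values untouched; quote only values containing
        # characters (e.g. parentheses) that bash would misinterpret.
        if value and set(value) <= _SAFE_CHARS:
            return value
        return shlex.quote(value)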

---------

Signed-off-by: Saurabh <[email protected]>
Signed-off-by: Saurabh Koshatwar <[email protected]>
Co-authored-by: Logan Adams <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
Correct the BACKWARD_PREFETCH_SUBMIT mismatch: the constant was defined with the forward key, duplicating FORWARD_PREFETCH_SUBMIT = 'forward_prefetch_submit'.
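
The implied one-line fix, as a sketch inferred from the commit message:

    # before: the backward constant duplicated the forward key
    BACKWARD_PREFETCH_SUBMIT = 'forward_prefetch_submit'
    # after
    BACKWARD_PREFETCH_SUBMIT = 'backward_prefetch_submit'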

---------

Signed-off-by: Shaik Raza Sikander <[email protected]>
Signed-off-by: Olatunji Ruwase <[email protected]>
Signed-off-by: Max Kovalenko <[email protected]>
Signed-off-by: inkcherry <[email protected]>
Signed-off-by: shaomin <[email protected]>
Signed-off-by: Stas Bekman <[email protected]>
Signed-off-by: siqi <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
Signed-off-by: Wei Wu <[email protected]>
Signed-off-by: ShellyNR <[email protected]>
Signed-off-by: Lai, Yejing <[email protected]>
Signed-off-by: Hongwei <[email protected]>
Signed-off-by: A-transformer <[email protected]>
Co-authored-by: Raza Sikander <[email protected]>
Co-authored-by: Olatunji Ruwase <[email protected]>
Co-authored-by: Max Kovalenko <[email protected]>
Co-authored-by: Logan Adams <[email protected]>
Co-authored-by: inkcherry <[email protected]>
Co-authored-by: wukong1992 <[email protected]>
Co-authored-by: shaomin <[email protected]>
Co-authored-by: Hongwei Chen <[email protected]>
Co-authored-by: loadams <[email protected]>
Co-authored-by: Stas Bekman <[email protected]>
Co-authored-by: siqi654321 <[email protected]>
Co-authored-by: siqi <[email protected]>
Co-authored-by: Wei Wu <[email protected]>
Co-authored-by: Masahiro Tanaka <[email protected]>
Co-authored-by: Shelly Nahir <[email protected]>
Co-authored-by: snahir <[email protected]>
Co-authored-by: Yejing-Lai <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
…Tests (#7146)

Enhancing CI/nightly coverage for the Gaudi2 device.
Tests added:
        test_autotp_training.py
        test_ulysses.py
        test_linear::TestLoRALinear and test_linear::TestBasicLinear
        test_ctx::TestEngine
These provide coverage for model parallelism and the linear feature.
The tests are stable: 10/10 runs pass.
The new tests are expected to increase CI time by 3-4 minutes and the nightly job time by 15 minutes.

Signed-off-by: Shaik Raza Sikander <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
Changes from huggingface/transformers#36654 cause issues with the torch 2.5 version we were using. This just updates us to a newer version.

---------

Signed-off-by: Logan Adams <[email protected]>
Signed-off-by: Masahiro Tanaka <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
@tjruwase Don't merge yet, I will leave a comment when it is ready for
merge. Thank you.

---------

Signed-off-by: Olatunji Ruwase <[email protected]>
Signed-off-by: inkcherry <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
Co-authored-by: Olatunji Ruwase <[email protected]>
Co-authored-by: Logan Adams <[email protected]>
Signed-off-by: Logan Adams <[email protected]>

This PR is a continuation of the efforts to improve DeepSpeed
performance when using PyTorch compile.

Dynamo breaks the graph because `flat_tensor.requires_grad = False`:

* Is a side-effecting operation on tensor metadata
* Occurs in a context where Dynamo expects static tensor properties for
tracing

The `flat_tensor.requires_grad = False` assignment is redundant and can be safely removed because:
* the `_allgather_params()` function is already decorated with `@torch.no_grad()`, which ensures the desired property
* `flat_tensor` is created using `torch.empty()`, which sets `requires_grad=False` by default.
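
In code, the pattern described above looks roughly like this; a simplified sketch, not the actual `_allgather_params()` body:

    import torch

    @torch.no_grad()  # already guarantees no autograd tracking here
    def _allgather_params_sketch(numel: int) -> torch.Tensor:
        # torch.empty() creates tensors with requires_grad=False by
        # default, so the explicit `flat_tensor.requires_grad = False`
        # that broke the Dynamo graph can simply be dropped.
        flat_tensor = torch.empty(numel, dtype=torch.float16)
        return flat_tensor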

---------

Signed-off-by: Max Kovalenko <[email protected]>
Co-authored-by: Logan Adams <[email protected]>
Co-authored-by: Hongwei Chen <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
ZeRO-3 requires explicit cleanup in tests when reusing the environment.
This PR adds `destroy` calls to the tests to free memory and avoid
potential errors due to memory leaks.
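
A minimal sketch of the cleanup pattern, assuming the engine's `destroy()` method is what the added calls use:

    import deepspeed

    def run_zero3_test(net, ds_config):
        engine, _, _, _ = deepspeed.initialize(model=net,
                                               config=ds_config,
                                               model_parameters=net.parameters())
        try:
            pass  # actual test body goes here
        finally:
            # Explicit cleanup: ZeRO-3 holds partitioned parameters and
            # communication state that a reused environment will not free.
            engine.destroy()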

Signed-off-by: Masahiro Tanaka <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
Signed-off-by: c8ef <[email protected]>
Co-authored-by: Logan Adams <[email protected]>
Co-authored-by: Olatunji Ruwase <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
Signed-off-by: Hongwei <[email protected]>
Co-authored-by: Logan Adams <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
Comment on lines +4 to +6 of pyproject.toml:

    "setuptools>=64",
    "torch",
    "wheel"

If you depend on setuptools 70.1 or later, you won't need wheel.

Suggested change:

    -    "setuptools>=64",
    -    "torch",
    -    "wheel"
    +    "setuptools>=70.1",
    +    "torch"

Successfully merging this pull request may close these issues:

Fix - Update DeepSpeed to be PEP517 compliant, update to pyproject.toml (#7031)