
Update libcudacxx to 2.1.0 #464

Merged: 2 commits into rapidsai:branch-23.12 on Oct 16, 2023

Conversation

bdice (Contributor) commented on Sep 22, 2023

Description

This PR separates out the libcudacxx update from #399. I am proposing to update only libcudacxx to 2.1.0 and leave Thrust/CUB pinned at 1.17.2 until all of RAPIDS is ready to update. Then we can move forward with #399.

Separating the libcudacxx update should allow RAPIDS to use some of the new features we want, while giving RAPIDS libraries more time to migrate to CCCL 2.1.0 (particularly for breaking changes in Thrust/CUB).

Immediate benefits of bumping only libcudacxx to 2.1.0:

Risk Assessment:
This should be fairly low risk because libcudacxx 2.1.0 is similar to our current pin of 1.9.1 -- the major version bump was meant to align with Thrust/CUB and is not indicative of major breaking changes.

Checklist

  • I am familiar with the Contributing Guidelines.
  • New or existing tests cover these changes.
  • The documentation is up to date with these changes.
  • The cmake-format.json is up to date with these changes.
  • I have added new files under rapids-cmake/
    • I have added include guards (include_guard(GLOBAL))
    • I have added the associated docs/ rst file and updated the api.rst

bdice (Contributor, Author) commented on Oct 14, 2023

Tracking CI pass/fail in these PRs:

I think it should be sufficient if these pass, since these are the only RAPIDS repos that explicitly use #include <cuda/...>. Thrust/CUB's internal use of libcudacxx is unaffected until we update Thrust/CUB as well, so the impact should be isolated to RAPIDS libraries' own use of libcudacxx.
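
To make that distinction concrete, here is a hypothetical snippet (not taken from any of the repos being tracked) showing the kind of direct libcudacxx usage in question: the file includes a <cuda/...> header itself rather than picking libcudacxx up transitively through Thrust/CUB.

```cuda
// Hypothetical example of direct libcudacxx usage: the <cuda/atomic> header is
// included explicitly, so this code compiles against whatever libcudacxx
// version rapids-cmake pins, independent of the Thrust/CUB pin.
#include <cuda/atomic>

__global__ void count_positive(const int* values, int n, int* counter)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && values[i] > 0) {
        // cuda::atomic_ref gives plain device memory well-defined atomic
        // semantics with an explicit thread scope.
        cuda::atomic_ref<int, cuda::thread_scope_device> ref(*counter);
        ref.fetch_add(1, cuda::memory_order_relaxed);
    }
}
```

Code like this is affected directly by the libcudacxx pin, which is why these repos are the ones worth watching in CI.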

bdice added the improvement (Improves an existing functionality) and non-breaking (Introduces a non-breaking change) labels on Oct 14, 2023
bdice self-assigned this on Oct 14, 2023
bdice (Contributor, Author) commented on Oct 16, 2023

cuSpatial is a bit tricky to test with an updated libcudacxx package unless I build it locally from source. I am going to attempt that tomorrow to verify, but overall I expect the risk of breaking builds by updating libcudacxx to 2.1.0 to be very low.

bdice marked this pull request as ready for review on October 16, 2023.
bdice (Contributor, Author) commented on Oct 16, 2023

I verified that cuSpatial builds locally. This PR should be ready to review and merge.

bdice requested review from robertmaynard and vyasr on October 16, 2023.
bdice (Contributor, Author) commented on Oct 16, 2023

/merge

rapids-bot merged commit e73fef6 into rapidsai:branch-23.12 on Oct 16, 2023. 15 checks passed.
sleeepyjack pushed a commit to NVIDIA/cuCollections that referenced this pull request on Oct 16, 2023:
This updates cuCollections to rapids-cmake 23.12. This comes with rapidsai/rapids-cmake#464, which updates libcudacxx to 2.1.0. That should unblock several cuCollections issues such as #332, #331, and #289.
miscco (Contributor) commented on Oct 17, 2023

That is amazing, thanks a lot for pushing this through

rapids-bot pushed a commit to rapidsai/cugraph that referenced this pull request on Dec 8, 2023:
This PR adds `cuda::proclaim_return_type` to device lambdas used in `thrust::transform` and `thrust::make_transform_iterator`.

This PR requires libcudacxx 2.1.0, which was provided by rapidsai/rapids-cmake#464.

Closes #3863.

Authors:
  - Seunghwa Kang (https://github.com/seunghwak)
  - Bradley Dice (https://github.com/bdice)

Approvers:
  - Bradley Dice (https://github.com/bdice)
  - Chuck Hastings (https://github.com/ChuckHastings)

URL: #3862
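
For readers unfamiliar with the pattern that cugraph change describes, below is a minimal sketch of wrapping a device lambda with `cuda::proclaim_return_type`, which lives in `<cuda/functional>` as of libcudacxx 2.1.0. The data and the transform itself are illustrative, not taken from the cugraph PR, and extended device lambdas require compiling with nvcc and --extended-lambda.

```cuda
#include <cuda/functional>
#include <thrust/device_vector.h>
#include <thrust/sequence.h>
#include <thrust/transform.h>

int main()
{
    thrust::device_vector<int> in(4);
    thrust::sequence(in.begin(), in.end(), 1);  // fills with 1, 2, 3, 4
    thrust::device_vector<double> out(in.size());

    // proclaim_return_type states the lambda's return type explicitly, so
    // Thrust does not have to deduce it for an extended __device__ lambda.
    thrust::transform(in.begin(), in.end(), out.begin(),
                      cuda::proclaim_return_type<double>(
                          [] __device__ (int x) { return x * 0.5; }));
    return 0;
}
```

The motivation for the wrapper is that the return type of an extended __device__ lambda generally cannot be deduced from host code, so proclaiming it explicitly keeps calls like `thrust::transform` and `thrust::make_transform_iterator` well-defined.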