
[CUDA] Need a robust check for MPI <-> device mapping in CUDA #3

Open
wavefunction91 opened this issue Sep 25, 2020 · 0 comments
Labels: bug, cuda, enhancement

wavefunction91 (Owner) commented Sep 25, 2020

The check introduced in 7ba2f43 and reverted in 94a1f86 is not robust. It fails for the following resource configuration on Summit: https://jsrunvisualizer.olcf.ornl.gov/?s1f0o11n6c7g1r11d1b27l0=

Running under shared-memory MPI (multiple ranks per node) is not mutually exclusive with each MPI rank seeing a single, unique device, so the number of node-local ranks cannot by itself signal device oversubscription.
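For concreteness, the kind of logic that misfires is sketched below. This is an assumption about the shape of the reverted check, not the actual code from 7ba2f43: compare the node-local communicator size against the number of visible CUDA devices. On Summit, jsrun can expose a single GPU to each rank via its resource set, so the comparison fires even though no device is actually shared.

```cpp
#include <mpi.h>
#include <cuda_runtime.h>
#include <cstdio>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);

  // Node-local (shared-memory) communicator.
  MPI_Comm node_comm;
  MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                      MPI_INFO_NULL, &node_comm);

  int local_size; MPI_Comm_size(node_comm, &local_size);
  int ndev;       cudaGetDeviceCount(&ndev);

  // Naive inference: more node-local ranks than visible devices
  // "must" mean ranks share a GPU. With one GPU exposed per resource
  // set (ndev == 1) and several ranks per node, this is a false
  // positive: every rank actually owns a distinct physical device.
  if (local_size > ndev)
    std::printf("suspected oversubscription: %d ranks, %d device(s)\n",
                local_size, ndev);

  MPI_Comm_free(&node_comm);
  MPI_Finalize();
  return 0;
}
```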

The current workaround is to establish the proper device mapping prior to the integrator call, based on a known MPI <-> device affinity (i.e. the caller replicates the binding according to that known affinity); a sketch follows below.
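A minimal sketch of such a pre-call binding, assuming the usual local-rank round-robin idiom (`bind_device_by_local_rank` is a hypothetical helper, not part of the library):

```cpp
#include <mpi.h>
#include <cuda_runtime.h>

// Bind the calling rank to a device round-robin by node-local rank.
// Intended to run before the integrator so the CUDA context is
// created on the intended device.
void bind_device_by_local_rank(MPI_Comm comm) {
  MPI_Comm node_comm;
  MPI_Comm_split_type(comm, MPI_COMM_TYPE_SHARED, 0,
                      MPI_INFO_NULL, &node_comm);

  int local_rank; MPI_Comm_rank(node_comm, &local_rank);
  int ndev;       cudaGetDeviceCount(&ndev);

  // If the launcher already restricts visibility (e.g. one GPU per
  // resource set on Summit), ndev == 1 and this reduces to device 0;
  // otherwise it spreads the node's ranks over its visible devices.
  cudaSetDevice(local_rank % ndev);

  MPI_Comm_free(&node_comm);
}
```

The binding must happen before any CUDA work touches the wrong device, hence "prior to integrator call".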

wavefunction91 added the bug and enhancement labels Sep 25, 2020
wavefunction91 changed the title from "Need a robust check for MPI <-> device mapping in CUDA" to "[CUDA] Need a robust check for MPI <-> device mapping in CUDA" Oct 2, 2020
wavefunction91 added the cuda label Oct 2, 2020
wavefunction91 pushed a commit that referenced this issue Mar 12, 2021
wavefunction91 added a commit that referenced this issue Aug 23, 2022