multi gpu example #304

Open · amwi04 wants to merge 4 commits into main

Conversation

@amwi04 amwi04 (Collaborator) commented Dec 16, 2024

  1. GPU 0 will perform vector_add
  2. GPU 1 will perform vector_substract
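
A rough sketch of what this two-GPU layout could look like with cuda.core (an illustration under assumptions, not the PR's actual code; the device IDs and the elided compile/launch steps are placeholders):

from cuda.core.experimental import Device

# GPU 0: make it current and create a stream for the vector_add work.
dev0 = Device(0)
dev0.set_current()
stream0 = dev0.create_stream()

# GPU 1: make it current and create a stream for the vector_substract work.
dev1 = Device(1)
dev1.set_current()
stream1 = dev1.create_stream()

# ... compile and launch vector_add on (dev0, stream0) and
# vector_substract on (dev1, stream1) ...

# Wait for both GPUs to finish their independent work.
stream0.sync()
stream1.sync()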


copy-pr-bot bot commented Dec 16, 2024

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@leofang leofang self-requested a review December 16, 2024 16:13
@leofang leofang added the enhancement (Any code-related improvements), P0 (High priority - Must do!), and cuda.core (Everything related to the cuda.core module) labels Dec 16, 2024
@leofang leofang added this to the cuda.core beta 3 milestone Dec 16, 2024
@leofang leofang requested a review from vzhurba01 December 16, 2024 20:20
@leofang leofang linked an issue Dec 17, 2024 that may be closed by this pull request
@leofang leofang (Member) left a comment

Thanks, Amod! I left some comments.

cuda_core/examples/simple_multi_gpu_example.py

dev0.set_current()
stream0 = dev0.create_stream()

# allocate memory to GPU0

Suggested change:
- # allocate memory to GPU0
+ # Allocate memory to GPU0
+ # Note: This runs on CuPy's current stream

dev1.set_current()
stream1 = dev1.create_stream()

# allocate memory to GPU1

Suggested change:
- # allocate memory to GPU1
+ # Allocate memory to GPU1
+ # Note: This runs on CuPy's current stream
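
As the suggested comments note, CuPy allocations are issued on CuPy's current device and current stream, which are tracked separately from the cuda.core streams created in this example. A hedged sketch of one way to scope each allocation to the intended GPU (the array sizes and variable names are illustrative, not the PR's code):

import cupy as cp

# Allocations follow CuPy's current device, so scope each one explicitly.
with cp.cuda.Device(0):
    a = cp.random.random(50000)  # resides on GPU 0
with cp.cuda.Device(1):
    b = cp.random.random(50000)  # resides on GPU 1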

Labels
cuda.core (Everything related to the cuda.core module) · enhancement (Any code-related improvements) · P0 (High priority - Must do!)
Development

Successfully merging this pull request may close these issues.

Add a simple multi-GPU code sample based on cuda.core
3 participants