GPU accelerated backends #18

Open

fluidnumerics-joe opened this issue Dec 6, 2023 · 0 comments
Labels: enhancement (New feature or request)
Now that support is in place for rank 1 through rank 4 arrays (fp32 and fp64), it's time to look into supporting GPU acceleration for function evaluation.

To support portability between Nvidia and AMD GPUs, I'm thinking of using AMD's HIP. Because of the current status of ROCm, with spotty support for Windows and no support for macOS, this build feature will need to be optional. Additionally, because some users may be on systems that have only the CUDA toolkit installed and not ROCm, we'll need to use preprocessing to map the GPU memory-management procedures to either the CUDA or HIP methods.
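As a minimal sketch of that preprocessor mapping, assuming `ENABLE_CUDA`/`ENABLE_HIP` compile definitions coming from the build options in the to-do list below (the header name and the `gpu*` alias names are hypothetical, not part of the existing code):

```c
/* gpu_interface.h -- hypothetical header name. Maps a common set of
 * memory-management names onto the CUDA or HIP runtime, selected by
 * an ENABLE_CUDA or ENABLE_HIP compile definition assumed to be set
 * by the build system. */
#if defined(ENABLE_CUDA)
  #include <cuda_runtime.h>
  #define gpuMalloc              cudaMalloc
  #define gpuFree                cudaFree
  #define gpuMemcpy              cudaMemcpy
  #define gpuMemcpyHostToDevice  cudaMemcpyHostToDevice
  #define gpuMemcpyDeviceToHost  cudaMemcpyDeviceToHost
#elif defined(ENABLE_HIP)
  #include <hip/hip_runtime.h>
  #define gpuMalloc              hipMalloc
  #define gpuFree                hipFree
  #define gpuMemcpy              hipMemcpy
  #define gpuMemcpyHostToDevice  hipMemcpyHostToDevice
  #define gpuMemcpyDeviceToHost  hipMemcpyDeviceToHost
#else
  #error "Either ENABLE_CUDA or ENABLE_HIP must be defined"
#endif
```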

To do list

Build system

- [ ] Add option for enabling HIP
- [ ] Add option for enabling CUDA
- [ ] Add CUDA and HIP build options to spack package
- [ ] Build with HIP support with fpm?
- [ ] Build with CUDA support with fpm?

Compute Kernels

We will need the following element-wise functions/operations defined as HIP kernels, for both 32-bit and 64-bit floating-point device pointers (a minimal kernel sketch follows the list):

- [ ] c = a+b
- [ ] c = a-b
- [ ] c = a*b
- [ ] c = a/b
- [ ] c = a^s (s is a scalar)
- [ ] c = \abs(a)
- [ ] c = \cos(a)
- [ ] c = \sin(a)
- [ ] c = \tan(a)
- [ ] c = \acos(a)
- [ ] c = \asin(a)
- [ ] c = \atan(a)
- [ ] c = \sinh(a)
- [ ] c = \cosh(a)
- [ ] c = \tanh(a)
- [ ] c = \sqrt(a)
- [ ] c = \ln(a) (natural logarithm)
- [ ] c = \log(a) (log base-10)
- [ ] c = -a (sign flip)
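Here is a minimal sketch of what one of these kernels might look like, using the binary c = a+b case and templating over the element type so a single definition covers fp32 and fp64. The `add_kernel`/`gpu_add` names are hypothetical, and the sketch assumes the rank 1 through rank 4 arrays are handed to the kernel as flat device buffers of length n, which is all an element-wise operation needs:

```cpp
#include <hip/hip_runtime.h>

// Elementwise c = a + b over flat device buffers of length n.
// Templated so the same definition serves float (fp32) and
// double (fp64) device pointers.
template <typename T>
__global__ void add_kernel(const T *a, const T *b, T *c, size_t n) {
  size_t i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) {
    c[i] = a[i] + b[i];
  }
}

// Host-side launcher; hypothetical name, shown only to illustrate
// the launch configuration for a 1-D elementwise kernel.
template <typename T>
void gpu_add(const T *a, const T *b, T *c, size_t n) {
  const int threads = 256;
  const int blocks = static_cast<int>((n + threads - 1) / threads);
  hipLaunchKernelGGL(add_kernel<T>, dim3(blocks), dim3(threads), 0, 0,
                     a, b, c, n);
}
```

The unary entries in the list would follow the same pattern, with the kernel body swapped for the corresponding device math function.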

fluidnumerics-joe added the enhancement label Dec 6, 2023