content/en/blog/running-gpu-based-functions-on-fission.md (10 additions & 18 deletions)
@@ -19,7 +19,7 @@ In this guide, we will show you how to set up a GPU-enabled Fission environment
 GPUs are efficient for SIMD (Single Instruction, Multiple Data) computations, which are commonly used in deep learning and matrix operations.
 Many serverless workloads need to perform these operations, and GPUs can help you run them more efficiently.
 
-Fission users have been using Fission for ML model deployment and various use cases, some of organizations are using Fission for production workloads and need to run GPU-based functions to meet their performance requirements.
+Fission users have been using Fission for ML model deployment and various other use cases. Some of these organizations run Fission for production workloads and need to run GPU-based functions to meet their performance requirements.
 
 ## Prerequisites
 
@@ -33,7 +33,7 @@ Please refer to [Kubernetes GPU Support](https://kubernetes.io/docs/tasks/manage
 
 The NVIDIA GPU Operator helps in managing GPU resources in a Kubernetes cluster. It provides a way to configure and manage GPUs in Kubernetes.
 You can refer to the [Guide to NVIDIA GPU Operator in Kubernetes](https://www.infracloud.io/blogs/guide-to-nvidia-gpu-operator-in-kubernetes/).
-You should have see nodes with gpu label in your cluster.
+You should see nodes with the GPU label in your cluster.
 
 ```bash
 $ kubectl get node -l nvidia.com/gpu.present=true
@@ -51,9 +51,9 @@ Before you start working on this demo, you need to ensure that you have Fission
 
 Fission functions need an environment to run the function code. For running GPU-based functions, we need to create an environment which can leverage the GPU resources.
 
-Following are the steps to create a environment with GPU support and run a GPU based function.
+The following are the steps to create an environment with GPU support and run a GPU-based function.
 
-- We would a Python based environment runtime and builder images with all the dependencies installed for running a GPU based function. Eg. Pytorch, Cuda, etc.
+- We will create a Python based environment runtime and builder images with all the dependencies installed for running a GPU-based function, e.g. PyTorch, CUDA, etc.
 - Verify the environment and builder images are functional and can utilize the GPU resources.
 - Create a function package using the [sentiment analysis model from Hugging Face](https://huggingface.co/distilbert/distilbert-base-uncased-finetuned-sst-2-english) and then create a function using this package.
 - Run the function and verify sentiment analysis for a given sentence.
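
To make the first step above concrete, here is a minimal sketch of creating such an environment with the Fission CLI. The environment name and image names are placeholders (not from the original post); the actual runtime and builder images would need PyTorch, CUDA, and the other dependencies preinstalled.

```bash
# Hypothetical names -- substitute the GPU-enabled runtime and builder images
# you build with PyTorch/CUDA preinstalled.
RUNTIME_IMAGE=example.registry.io/python-gpu-env:latest
BUILDER_IMAGE=example.registry.io/python-gpu-builder:latest

# Create a Fission environment backed by those images.
fission env create --name python-gpu \
  --image "$RUNTIME_IMAGE" \
  --builder "$BUILDER_IMAGE"
```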
@@ -171,22 +171,14 @@ In this step, we will do following things:
 ```
 
 - The `fission env create` command will create two deployments. One deployment named `poolmgr-python-default-*` for the environment and another for the builder named `python-*`.
-- Edit the environment deployment and add GPU resources to `python` environment container.
-
-```yaml
-resources:
-  limits:
-    nvidia.com/gpu: "1"
-  requests:
-    nvidia.com/gpu: "1"
-```
+- Patch the environment deployment to add GPU resources to the `python` environment container and set `nodeSelector` to schedule pods on a GPU node using the `kubectl patch` command.
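
As an illustration of that patch step, a strategic merge patch along the following lines could add the GPU resources and node selector. This is a sketch rather than the exact command from the post: the deployment name and namespace are assumptions (look up the actual `poolmgr-python-default-*` deployment first), and the node selector reuses the `nvidia.com/gpu.present=true` label shown earlier.

```bash
# Assumed names: replace with the real poolmgr deployment and the namespace
# in which your Fission environment pods run.
DEPLOY=poolmgr-python-default-123456
NAMESPACE=default

# Strategic merge patch: adds a nodeSelector and GPU requests/limits to the
# container named "python" without replacing the rest of the pod spec.
kubectl patch deployment "$DEPLOY" -n "$NAMESPACE" --type strategic -p '
spec:
  template:
    spec:
      nodeSelector:
        nvidia.com/gpu.present: "true"
      containers:
        - name: python
          resources:
            limits:
              nvidia.com/gpu: "1"
            requests:
              nvidia.com/gpu: "1"
'
```

Because this is a strategic merge patch, only the container entry named `python` is modified; a plain JSON merge patch would replace the entire `containers` list.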