
Commit 32ed46b (parent 3af22e5)

[pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci
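The changes below (SPDX/copyright header insertion into YAML files, `*emphasis*` normalized to `_emphasis_` in Markdown, whitespace-only fixes) are typical output of pre-commit hooks. The repository's actual hook configuration is not part of this diff; as a hypothetical sketch, a `.pre-commit-config.yaml` along these lines would produce this kind of commit (hook repositories and ids are real, the `rev` pins and license-file path are illustrative):

```yaml
# Hypothetical .pre-commit-config.yaml -- not taken from this repository.
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: trailing-whitespace # strips trailing spaces
      - id: end-of-file-fixer # ensures files end with a single newline
  - repo: https://github.com/pre-commit/mirrors-prettier
    rev: v3.1.0
    hooks:
      - id: prettier # normalizes Markdown, e.g. *emphasis* -> _emphasis_
        files: \.md$
  - repo: https://github.com/Lucas-C/pre-commit-hooks
    rev: v1.5.5
    hooks:
      - id: insert-license # prepends the copyright / SPDX header
        files: \.yaml$
        args: [--license-filepath, .license-header.txt, --comment-style, "#"]
```

pre-commit.ci runs whichever hooks the repository actually declares and pushes the resulting fixes as an automated commit like this one.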

This is a large commit; only a subset of the 62 changed files is shown below.

62 files changed: 228 lines added (+), 34 lines deleted (−)

helm-charts/codegen-openshift-rhoai/Chart.yaml

Lines changed: 3 additions & 0 deletions
```diff
@@ -1,3 +1,6 @@
+# Copyright (C) 2024 Intel Corporation
+# SPDX-License-Identifier: Apache-2.0
+
 apiVersion: v2
 name: codegen
 description: A Helm chart for deploying codegen on Red Hat OpenShift with Red Hat OpenShift AI
```

helm-charts/codegen-openshift-rhoai/README.md

Lines changed: 28 additions & 15 deletions
````diff
@@ -2,18 +2,19 @@
 
 Helm chart for deploying CodeGen service on Red Hat OpenShift with Red Hat OpenShift AI.
 
-Serving runtime template in this example uses model *ise-uiuc/Magicoder-S-DS-6.7B* for Xeon and *meta-llama/CodeLlama-7b-hf* for Gaudi.
+Serving runtime template in this example uses model _ise-uiuc/Magicoder-S-DS-6.7B_ for Xeon and _meta-llama/CodeLlama-7b-hf_ for Gaudi.
 
-## Prerequisites
+## Prerequisites
 
-1. **Red Hat OpenShift Cluster** with dynamic *StorageClass* to provision *PersistentVolumes* e.g. **OpenShift Data Foundation**) and installed Operators: **Red Hat - Authorino (Technical Preview)**, **Red Hat OpenShift Service Mesh**, **Red Hat OpenShift Serverless** and **Red Hat Openshift AI**.
+1. **Red Hat OpenShift Cluster** with dynamic _StorageClass_ to provision _PersistentVolumes_ e.g. **OpenShift Data Foundation**) and installed Operators: **Red Hat - Authorino (Technical Preview)**, **Red Hat OpenShift Service Mesh**, **Red Hat OpenShift Serverless** and **Red Hat Openshift AI**.
 2. Image registry to push there docker images (https://docs.openshift.com/container-platform/4.16/registry/securing-exposing-registry.html).
 3. Access to S3-compatible object storage bucket (e.g. **OpenShift Data Foundation**, **AWS S3**) and values of access and secret access keys and S3 endpoint (https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/managing_hybrid_and_multicloud_resources/accessing-the-multicloud-object-gateway-with-your-applications_rhodf#accessing-the-multicloud-object-gateway-with-your-applications_rhodf).
-4. Account on https://huggingface.co/, access to model *ise-uiuc/Magicoder-S-DS-6.7B* (for Xeon) or *meta-llama/CodeLlama-7b-hf* (for Gaudi) and token with Read permissions.
+4. Account on https://huggingface.co/, access to model _ise-uiuc/Magicoder-S-DS-6.7B_ (for Xeon) or _meta-llama/CodeLlama-7b-hf_ (for Gaudi) and token with Read permissions.
 
 ## Deploy model in Red Hat Openshift AI
 
-1. Login to OpenShift CLI and run following commands to create new serving runtime and *hf-token* secret.
+1. Login to OpenShift CLI and run following commands to create new serving runtime and _hf-token_ secret.
+
 ```
 cd GenAIInfra/helm-charts/codegen-openshift-rhoai/
 export HFTOKEN="insert-your-huggingface-token-here"
@@ -25,24 +26,29 @@ On Gaudi:
 helm install servingruntime tgi --set global.huggingfacehubApiToken=${HFTOKEN} --values tgi/gaudi-values.yaml
 ```
 
-Verify if template has been created with ```oc get template -n redhat-ods-applications``` command.
+Verify if template has been created with `oc get template -n redhat-ods-applications` command.
 
 2. Find the route for **Red Hat OpenShift AI** dashboard with below command and open it in the browser:
+
 ```
 oc get routes -A | grep rhods-dashboard
 ```
-3. Go to **Data Science Project** and clik **Create data science project**. Fill the **Name** and click **Create**.
-4. Go to **Workbenches** tab and clik **Create workbench**. Fill the **Name**, under **Notebook image** choose *Standard Data Science*, under **Cluster storage** choose *Create new persistent storage* and change **Persistent storage size** to 40 GB. Click **Create workbench**.
+
+3. Go to **Data Science Project** and click **Create data science project**. Fill the **Name** and click **Create**.
+4. Go to **Workbenches** tab and click **Create workbench**. Fill the **Name**, under **Notebook image** choose _Standard Data Science_, under **Cluster storage** choose _Create new persistent storage_ and change **Persistent storage size** to 40 GB. Click **Create workbench**.
 5. Open newly created Jupiter notebook and run following commands to download the model and upload it on s3:
+
 ```
 %env S3_ENDPOINT=<S3_RGW_ROUTE>
 %env S3_ACCESS_KEY=<AWS_ACCESS_KEY_ID>
 %env S3_SECRET_KEY=<AWS_SECRET_ACCESS_KEY>
 %env HF_TOKEN=<PASTE_HUGGINGFACE_TOKEN>
 ```
+
 ```
 !pip install huggingface-hub
 ```
+
 ```
 import os
 import boto3
@@ -63,15 +69,21 @@ s3_resource = session.resource('s3',
 aws_secret_access_key=s3_secretkey)
 bucket = s3_resource.Bucket(bucket_name)
 ```
-For Xeon download *ise-uiuc/Magicoder-S-DS-6.7B*:
+
+For Xeon download _ise-uiuc/Magicoder-S-DS-6.7B_:
+
 ```
 snapshot_download("ise-uiuc/Magicoder-S-DS-6.7B", cache_dir=f'./models', token=hf_token)
 ```
-For Gaudi download *meta-llama/CodeLlama-7b-hf*:
+
+For Gaudi download _meta-llama/CodeLlama-7b-hf_:
+
 ```
 snapshot_download("meta-llama/CodeLlama-7b-hf", cache_dir=f'./models', token=hf_token)
 ```
+
 Upload the downloaded model to S3:
+
 ```
 files = (file for file in glob.glob(f'{path}/**/*', recursive=True) if os.path.isfile(file) and "snapshots" in file)
 for filename in files:
@@ -80,8 +92,8 @@ for filename in files:
     bucket.upload_file(filename, f'{path}{s3_name}')
 ```
 
-6. Go to your project in **Red Hat OpenShift AI** dashboard, then "Models" tab and click **Deploy model** under *Single-model serving platform*. Fill the **Name**, choose newly created **Serving runtime**: *Text Generation Inference Magicoder-S-DS-6.7B on CPU* (for Xeon) or *Text Generation Inference CodeLlama-7b-hf on Gaudi* (for Gaudi), **Model framework**: *llm* and change **Model server size** to *Custom*: 16 CPUs and 64 Gi memory. For deployment with Gaudi select proper **Accelerator**. Click the checkbox to create external route in **Model route** section and uncheck the **Token authentication**. Under **Model location** choose *New data connection* and fill all required fields for s3 access, **Bucket** *first.bucket* and **Path**: *models*. Click **Deploy**. It takes about 10 minutes to get *Loaded* status.\
-If it's not going to *Loaded* status and revision changed status to "ProgressDeadlineExceeded" (``oc get revision``), scale model deployment to 0 and than to 1 with command ``oc scale deployment.apps/<model_deployment_name> --replicas=1`` and wait about 10 minutes for deployment.
+6. Go to your project in **Red Hat OpenShift AI** dashboard, then "Models" tab and click **Deploy model** under _Single-model serving platform_. Fill the **Name**, choose newly created **Serving runtime**: _Text Generation Inference Magicoder-S-DS-6.7B on CPU_ (for Xeon) or _Text Generation Inference CodeLlama-7b-hf on Gaudi_ (for Gaudi), **Model framework**: _llm_ and change **Model server size** to _Custom_: 16 CPUs and 64 Gi memory. For deployment with Gaudi select proper **Accelerator**. Click the checkbox to create external route in **Model route** section and uncheck the **Token authentication**. Under **Model location** choose _New data connection_ and fill all required fields for s3 access, **Bucket** _first.bucket_ and **Path**: _models_. Click **Deploy**. It takes about 10 minutes to get _Loaded_ status.\
+   If it's not going to _Loaded_ status and revision changed status to "ProgressDeadlineExceeded" (`oc get revision`), scale model deployment to 0 and than to 1 with command `oc scale deployment.apps/<model_deployment_name> --replicas=1` and wait about 10 minutes for deployment.
 
 ## Install the Chart
 
@@ -101,11 +113,12 @@ sed -i "s/insert-your-namespace-here/${NAMESPACE}/g" codegen-openshift-rhoai/llm
 helm dependency update codegen-openshift-rhoai
 
 helm install codegen codegen-openshift-rhoai --set image.repository=image-registry.openshift-image-registry.svc:5000/${NAMESPACE}/codegen --set llm-uservice.image.repository=image-registry.openshift-image-registry.svc:5000/${NAMESPACE}/llm-tgi --set react-ui.image.repository=image-registry.openshift-image-registry.svc:5000/${NAMESPACE}/react-ui --set global.clusterDomain=${CLUSTERDOMAIN} --set global.huggingfacehubApiToken=${HFTOKEN} --set llm-uservice.servingRuntime.name=${MODELNAME} --set llm-uservice.servingRuntime.namespace=${PROJECT}
-```
+```
 
 ## Verify
 
-To verify the installation, run the command `oc get pods` to make sure all pods are running. Wait about 5 minutes for building images. When 4 pods achieve *Completed* status, the rest with services should go to *Running*.
+To verify the installation, run the command `oc get pods` to make sure all pods are running. Wait about 5 minutes for building images. When 4 pods achieve _Completed_ status, the rest with services should go to _Running_.
 
 ## Launch the UI
-To access the frontend, find the route for *react-ui* with command `oc get routes` and open it in the browser.
+
+To access the frontend, find the route for _react-ui_ with command `oc get routes` and open it in the browser.
````

helm-charts/codegen-openshift-rhoai/llm-uservice/Chart.yaml

Lines changed: 3 additions & 0 deletions
```diff
@@ -1,3 +1,6 @@
+# Copyright (C) 2024 Intel Corporation
+# SPDX-License-Identifier: Apache-2.0
+
 apiVersion: v2
 name: llm-uservice
 description: A Helm chart for deploying llm-uservice on Red Hat OpenShift with Red Hat OpenShift AI
```

helm-charts/codegen-openshift-rhoai/llm-uservice/templates/buildconfig.yaml

Lines changed: 3 additions & 0 deletions
```diff
@@ -1,3 +1,6 @@
+# Copyright (C) 2024 Intel Corporation
+# SPDX-License-Identifier: Apache-2.0
+
 kind: BuildConfig
 apiVersion: build.openshift.io/v1
 metadata:
```

helm-charts/codegen-openshift-rhoai/llm-uservice/templates/configmap.yaml

Lines changed: 4 additions & 1 deletion
```diff
@@ -1,3 +1,6 @@
+# Copyright (C) 2024 Intel Corporation
+# SPDX-License-Identifier: Apache-2.0
+
 apiVersion: v1
 kind: ConfigMap
 metadata:
@@ -7,7 +10,7 @@ data:
 #!/bin/bash
 EXISTS=$(oc get secret --ignore-not-found rhoai-ca-bundle)
 
-if [[ -z "${EXISTS}" ]]; then
+if [[ -z "${EXISTS}" ]]; then
 oc create secret generic -n {{ .Release.Namespace }} rhoai-ca-bundle --from-literal=tls.crt="$(oc extract secret/knative-serving-cert -n istio-system --to=- --keys=tls.crt)"
 else
 echo "oc get secret --ignore-not-found rhoai-ca-bundle returned non-empty string, not creating a secret"
```

(Leading whitespace of the script lines was lost in extraction; the `if` line changed only in whitespace.)

helm-charts/codegen-openshift-rhoai/llm-uservice/templates/deployment.yaml

Lines changed: 4 additions & 1 deletion
```diff
@@ -1,3 +1,6 @@
+# Copyright (C) 2024 Intel Corporation
+# SPDX-License-Identifier: Apache-2.0
+
 apiVersion: apps/v1
 kind: Deployment
 metadata:
@@ -14,7 +17,7 @@ spec:
 template:
 metadata:
 labels:
-{{- include "llm-uservice.selectorLabels" . | nindent 8 }}
+{{- include "llm-uservice.selectorLabels" . | nindent 8 }}
 spec:
 securityContext: {}
 containers:
```

(Leading whitespace of the nested YAML was lost in extraction; the `include` line changed only in whitespace.)

helm-charts/codegen-openshift-rhoai/llm-uservice/templates/imagestream.yaml

Lines changed: 3 additions & 0 deletions
```diff
@@ -1,3 +1,6 @@
+# Copyright (C) 2024 Intel Corporation
+# SPDX-License-Identifier: Apache-2.0
+
 ---
 apiVersion: image.openshift.io/v1
 kind: ImageStream
```

helm-charts/codegen-openshift-rhoai/llm-uservice/templates/job.yaml

Lines changed: 3 additions & 0 deletions
```diff
@@ -1,3 +1,6 @@
+# Copyright (C) 2024 Intel Corporation
+# SPDX-License-Identifier: Apache-2.0
+
 apiVersion: batch/v1
 kind: Job
 metadata:
```

helm-charts/codegen-openshift-rhoai/llm-uservice/templates/rbac/crb-rhoai.yaml

Lines changed: 3 additions & 0 deletions
```diff
@@ -1,3 +1,6 @@
+# Copyright (C) 2024 Intel Corporation
+# SPDX-License-Identifier: Apache-2.0
+
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
```

helm-charts/codegen-openshift-rhoai/llm-uservice/templates/rbac/role.yaml

Lines changed: 3 additions & 0 deletions
```diff
@@ -1,3 +1,6 @@
+# Copyright (C) 2024 Intel Corporation
+# SPDX-License-Identifier: Apache-2.0
+
 {{- range $key, $value := .Values.rbac.roles }}
 {{- if $value.createRole }}
 ---
```
