---
title: Helm Benchmarking
weight: 6

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Helm Benchmark on GCP SUSE Arm64 VM
This guide explains **how to benchmark Helm on an Arm64-based GCP SUSE VM** using only the **Helm CLI**.
Since Helm does not provide built-in performance metrics, you measure **concurrency behavior** by running multiple Helm commands in parallel and recording the total execution time.

### Prerequisites
Before starting the benchmark, ensure Helm is installed and the Kubernetes cluster is accessible:

```console
helm version
kubectl get nodes
```

All nodes should be in the `Ready` state.
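
As a quick sanity check, you can count the nodes that are not yet `Ready` before starting. This is a minimal sketch; the `awk` field position assumes the default `kubectl get nodes` column layout:

```console
# Count nodes whose STATUS column is not "Ready"; 0 means the cluster is healthy
NOT_READY=$(kubectl get nodes --no-headers 2>/dev/null | awk '$2 != "Ready" {count++} END {print count+0}')
echo "nodes not ready: ${NOT_READY}"
```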


### Add Helm Repository
Helm installs applications using "charts."
This step tells Helm where to download those charts from and updates its local chart list:

```console
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
```

### Create Benchmark Namespace
Isolate benchmark workloads from other cluster resources:

```console
kubectl create namespace helm-bench
```

### Warm-Up Run (Recommended)
This step prepares the cluster by pulling container images and initializing caches:

```console
helm install warmup bitnami/nginx \
  -n helm-bench \
  --set service.type=ClusterIP \
  --timeout 10m
```
The first install is usually slower for the following reasons:

- Images must be downloaded.
- Kubernetes initializes internal objects.

This warm-up ensures the real benchmark measures Helm performance, not setup overhead.

**After validation, remove the warm-up deployment:**

```console
helm uninstall warmup -n helm-bench
```

{{% notice Note %}}
Helm does not provide native concurrency or throughput metrics. Concurrency benchmarking is performed by executing multiple Helm CLI operations in parallel and measuring overall completion time.
{{% /notice %}}

### Concurrent Helm Install Benchmark (No Wait)
Run multiple Helm installs in parallel using background jobs:

```console
time (
for i in {1..5}; do
  helm install nginx-$i bitnami/nginx \
    -n helm-bench \
    --set service.type=ClusterIP \
    --timeout 10m &
done
wait
)
```
This step simulates multiple teams deploying applications at the same time.
Helm submits all requests without waiting for pods to fully start.

What this measures:

* Helm concurrency handling
* Kubernetes API responsiveness
* Arm64 client-side performance

You should see output similar to:

```output
real    0m4.109s
user    0m12.178s
sys     0m0.470s
```
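
The loop above can also be wrapped in a reusable helper that times N parallel runs of any command, which makes it easy to vary the concurrency level. This is an illustrative sketch; the `parallel_time` and `install_one` names are hypothetical, not part of Helm:

```console
# Hypothetical helper: run a command n times in parallel, passing the run index
# as the final argument, and report wall-clock time for the whole batch.
parallel_time() {
  local n=$1; shift
  local start end
  start=$(date +%s)
  for i in $(seq 1 "$n"); do
    "$@" "$i" &
  done
  wait
  end=$(date +%s)
  echo "parallel=$n elapsed=$((end - start))s"
}

# Illustrative wrapper around the install command used above
install_one() {
  helm install "nginx-$1" bitnami/nginx \
    -n helm-bench \
    --set service.type=ClusterIP \
    --timeout 10m
}

# Example: parallel_time 5 install_one
```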

### Verify Deployments

This confirms that:

- Helm reports that all components were installed successfully
- Kubernetes actually created and started the applications

```console
helm list -n helm-bench
kubectl get pods -n helm-bench
```

Expected:

* All releases in the `deployed` state
* Pods in `Running` status

### Concurrent Helm Install Benchmark (With `--wait`)
This benchmark includes workload readiness time:

```console
time (
for i in {1..3}; do
  helm install nginx-wait-$i bitnami/nginx \
    -n helm-bench \
    --set service.type=ClusterIP \
    --wait \
    --timeout 15m &
done
wait
)
```

What this measures:

* Helm concurrency plus scheduler and image-pull contention
* End-to-end readiness impact

You should see output similar to:

```output
WARNING: There are "resources" sections in the chart not set. Using "resourcesPreset" is not recommended for production. For production installations, please set the following values according to your workload needs:
  - cloneStaticSiteFromGit.gitSync.resources
  - resources
+info https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

real    0m12.758s
user    0m7.360s
sys     0m0.227s
```
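
Between benchmark runs you may want to return the namespace to a clean state. A sketch using `helm list -q` (release names only) piped to `xargs`; the `-r` flag (skip on empty input) assumes GNU `xargs`, as shipped with SUSE:

```console
# Uninstall every Helm release in the benchmark namespace, one at a time
helm list -n helm-bench -q 2>/dev/null | xargs -r -n1 helm uninstall -n helm-bench
```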

### Metrics to Record

- **Total elapsed time**: overall time taken to complete all installs.
- **Number of parallel installs**: number of Helm installs run at the same time.
- **Failures**: any Helm failures or Kubernetes API errors.
- **Pod readiness delay**: time pods take to become `Ready` (resource pressure).
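
Pod readiness delay can be approximated from the Kubernetes API by comparing each pod's `creationTimestamp` with the `lastTransitionTime` of its `Ready` condition. This sketch assumes GNU `date` (standard on SUSE) for timestamp parsing:

```console
# Print "<pod> readiness_delay=<seconds>s" for each pod in the namespace
kubectl get pods -n helm-bench -o jsonpath='{range .items[*]}{.metadata.name} {.metadata.creationTimestamp} {.status.conditions[?(@.type=="Ready")].lastTransitionTime}{"\n"}{end}' 2>/dev/null |
while read -r name created ready; do
  [ -n "$ready" ] || continue  # skip pods that never reported a Ready condition
  delay=$(( $(date -d "$ready" +%s) - $(date -d "$created" +%s) ))
  echo "$name readiness_delay=${delay}s"
done
```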

### Benchmark summary on x86_64
For comparison, the same benchmark was run on a `c4-standard-4` (4 vCPUs, 15 GB memory) x86_64 VM in GCP, running SUSE:

| Test Case | Parallel Installs | `--wait` Used | Timeout | Total Time (real) |
| ---------------------------- | ----------------- | ------------- | ------- | ----------------- |
| Parallel Install (No Wait) | 5 | No | 10m | **6.06 s** |
| Parallel Install (With Wait) | 3 | Yes | 15m | **14.41 s** |


### Benchmark summary on Arm64
Results from the earlier run on the `c4a-standard-4` (4 vCPUs, 16 GB memory) Arm64 VM in GCP (SUSE):

| Test Case | Parallel Installs | `--wait` Used | Timeout | Total Time (real) |
| ---------------------------- | ----------------- | ------------- | ------- | ----------------- |
| Parallel Install (No Wait) | 5 | No | 10m | **4.11 s** |
| Parallel Install (With Wait) | 3 | Yes | 15m | **12.76 s** |
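
As a quick cross-check of the two tables, the relative Arm64 advantage can be computed directly from the recorded `real` times (pure `awk` arithmetic; the input numbers are the ones tabulated above):

```console
# Percentage improvement of Arm64 (c4a) over x86_64 (c4) for each test case
awk 'BEGIN {
  printf "no-wait:   %.1f%% faster on Arm64\n", (6.06 - 4.11) / 6.06 * 100
  printf "with-wait: %.1f%% faster on Arm64\n", (14.41 - 12.76) / 14.41 * 100
}'
```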

### Helm Benchmark comparison insights

- **Arm64 shows faster Helm execution** for both warm and ready states, indicating efficient CLI and Kubernetes API handling on Arm-based GCP instances.
- **The `--wait` flag significantly increases total execution time** because Helm waits for pods and services to reach a Ready state, revealing scheduler latency and image-pull delays rather than Helm CLI overhead.
- **Parallel Helm installs scale well on Arm64**, with minimal contention observed even at higher concurrency levels.
- **End-to-end workload readiness dominates benchmark results**, showing that cluster resource availability and container image pulls outweigh Helm's own client-side overhead.