chore: bump jnorwood/helm-docs image version to v1.14.2 #3531

Closed
2 changes: 1 addition & 1 deletion .github/workflows/linter.yml

@@ -22,7 +22,7 @@ jobs:

       - name: Check Docs
         run: |
-          docker run --rm --volume "$(pwd):/helm-docs" -u "$(id -u)" jnorwood/helm-docs:v1.8.1
+          docker run --rm --volume "$(pwd):/helm-docs" -u "$(id -u)" jnorwood/helm-docs:v1.14.2
           if ! git diff --exit-code; then
            echo "Documentation not up to date. Please run helm-docs and commit changes!" >&2
            exit 1
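For context, the step being changed is the docs freshness check in the lint workflow. A minimal sketch of what the surrounding job plausibly looks like (the trigger, job name, and checkout step are assumptions; only the `Check Docs` step above is confirmed by the diff):

```yaml
# Hypothetical reconstruction -- only the "Check Docs" step is in the diff.
name: helm-docs check
on: [pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      # The repo must be checked out so helm-docs can see the charts.
      - uses: actions/checkout@v4

      - name: Check Docs
        run: |
          # Regenerate every chart README with the pinned helm-docs image.
          docker run --rm --volume "$(pwd):/helm-docs" -u "$(id -u)" jnorwood/helm-docs:v1.14.2
          # A non-empty diff means a README is stale: fail the build.
          if ! git diff --exit-code; then
            echo "Documentation not up to date. Please run helm-docs and commit changes!" >&2
            exit 1
          fi
```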
2 changes: 1 addition & 1 deletion CONTRIBUTING.md

@@ -60,7 +60,7 @@ Charts should start at `1.0.0`. Any breaking (backwards incompatible) changes to
 The readme of each chart can be re-generated with the following command (run inside the chart directory):

 ```shell
-docker run --rm --volume "$(pwd):/helm-docs" -u "$(id -u)" jnorwood/helm-docs:v1.8.1
+docker run --rm --volume "$(pwd):/helm-docs" -u "$(id -u)" jnorwood/helm-docs:v1.14.2
 ```

 ### Community Requirements
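Worth noting for reviewers: helm-docs builds each chart's README table from specially formatted comments in values.yaml, so bumping the image can change the rendered READMEs even when no chart values changed. A minimal sketch of the comment convention (the keys here are illustrative, not from this repo):

```yaml
# values.yaml -- helm-docs turns "# --" comments into README table rows.
controller:
  # -- Number of replicas for the controller.
  replicas: 1
  autoscaling:
    # -- Creates a HorizontalPodAutoscaler for controller type deployment.
    enabled: false

# -- Liveness probe settings for the pods. If empty use the chart default.
# @default -- `{}`
livenessProbe: {}
```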
50 changes: 0 additions & 50 deletions charts/grafana-sampling/README.md

@@ -89,56 +89,6 @@ A major chart version change indicates that there is an incompatible breaking ch

 | Key | Type | Default | Description |
 |-----|------|---------|-------------|
-| alloy-deployment.alloy.configMap.create | bool | `false` | |
-| alloy-deployment.alloy.extraPorts[0].name | string | `"otlp-grpc"` | |
-| alloy-deployment.alloy.extraPorts[0].port | int | `4317` | |
-| alloy-deployment.alloy.extraPorts[0].protocol | string | `"TCP"` | |
-| alloy-deployment.alloy.extraPorts[0].targetPort | int | `4317` | |
-| alloy-deployment.alloy.extraPorts[1].name | string | `"otlp-http"` | |
-| alloy-deployment.alloy.extraPorts[1].port | int | `4318` | |
-| alloy-deployment.alloy.extraPorts[1].protocol | string | `"TCP"` | |
-| alloy-deployment.alloy.extraPorts[1].targetPort | int | `4318` | |
-| alloy-deployment.alloy.resources.requests.cpu | string | `"1"` | |
-| alloy-deployment.alloy.resources.requests.memory | string | `"2G"` | |
-| alloy-deployment.controller.autoscaling.enabled | bool | `false` | Creates a HorizontalPodAutoscaler for controller type deployment. |
-| alloy-deployment.controller.autoscaling.maxReplicas | int | `5` | The upper limit for the number of replicas to which the autoscaler can scale up. |
-| alloy-deployment.controller.autoscaling.minReplicas | int | `2` | The lower limit for the number of replicas to which the autoscaler can scale down. |
-| alloy-deployment.controller.autoscaling.targetCPUUtilizationPercentage | int | `0` | Average CPU utilization across all relevant pods, a percentage of the requested value of the resource for the pods. Setting `targetCPUUtilizationPercentage` to 0 will disable CPU scaling. |
-| alloy-deployment.controller.autoscaling.targetMemoryUtilizationPercentage | int | `80` | Average Memory utilization across all relevant pods, a percentage of the requested value of the resource for the pods. Setting `targetMemoryUtilizationPercentage` to 0 will disable Memory scaling. |
-| alloy-deployment.controller.replicas | int | `1` | |
-| alloy-deployment.controller.type | string | `"deployment"` | |
-| alloy-deployment.nameOverride | string | `"deployment"` | Do not change this. |
-| alloy-statefulset.alloy.configMap.create | bool | `false` | |
-| alloy-statefulset.alloy.extraEnv[0].name | string | `"GRAFANA_CLOUD_API_KEY"` | |
-| alloy-statefulset.alloy.extraEnv[0].value | string | `"<REQUIRED>"` | |
-| alloy-statefulset.alloy.extraEnv[1].name | string | `"GRAFANA_CLOUD_PROMETHEUS_URL"` | |
-| alloy-statefulset.alloy.extraEnv[1].value | string | `"<REQUIRED>"` | |
-| alloy-statefulset.alloy.extraEnv[2].name | string | `"GRAFANA_CLOUD_PROMETHEUS_USERNAME"` | |
-| alloy-statefulset.alloy.extraEnv[2].value | string | `"<REQUIRED>"` | |
-| alloy-statefulset.alloy.extraEnv[3].name | string | `"GRAFANA_CLOUD_TEMPO_ENDPOINT"` | |
-| alloy-statefulset.alloy.extraEnv[3].value | string | `"<REQUIRED>"` | |
-| alloy-statefulset.alloy.extraEnv[4].name | string | `"GRAFANA_CLOUD_TEMPO_USERNAME"` | |
-| alloy-statefulset.alloy.extraEnv[4].value | string | `"<REQUIRED>"` | |
-| alloy-statefulset.alloy.extraEnv[5].name | string | `"POD_UID"` | |
-| alloy-statefulset.alloy.extraEnv[5].valueFrom.fieldRef.apiVersion | string | `"v1"` | |
-| alloy-statefulset.alloy.extraEnv[5].valueFrom.fieldRef.fieldPath | string | `"metadata.uid"` | |
-| alloy-statefulset.alloy.extraPorts[0].name | string | `"otlp-grpc"` | |
-| alloy-statefulset.alloy.extraPorts[0].port | int | `4317` | |
-| alloy-statefulset.alloy.extraPorts[0].protocol | string | `"TCP"` | |
-| alloy-statefulset.alloy.extraPorts[0].targetPort | int | `4317` | |
-| alloy-statefulset.alloy.resources.requests.cpu | string | `"1"` | |
-| alloy-statefulset.alloy.resources.requests.memory | string | `"2G"` | |
-| alloy-statefulset.controller.autoscaling.enabled | bool | `false` | Creates a HorizontalPodAutoscaler for controller type deployment. |
-| alloy-statefulset.controller.autoscaling.maxReplicas | int | `5` | The upper limit for the number of replicas to which the autoscaler can scale up. |
-| alloy-statefulset.controller.autoscaling.minReplicas | int | `2` | The lower limit for the number of replicas to which the autoscaler can scale down. |
-| alloy-statefulset.controller.autoscaling.targetCPUUtilizationPercentage | int | `0` | Average CPU utilization across all relevant pods, a percentage of the requested value of the resource for the pods. Setting `targetCPUUtilizationPercentage` to 0 will disable CPU scaling. |
-| alloy-statefulset.controller.autoscaling.targetMemoryUtilizationPercentage | int | `80` | Average Memory utilization across all relevant pods, a percentage of the requested value of the resource for the pods. Setting `targetMemoryUtilizationPercentage` to 0 will disable Memory scaling. |
-| alloy-statefulset.controller.replicas | int | `1` | |
-| alloy-statefulset.controller.type | string | `"statefulset"` | |
-| alloy-statefulset.nameOverride | string | `"statefulset"` | Do not change this. |
-| alloy-statefulset.rbac.create | bool | `false` | |
-| alloy-statefulset.service.clusterIP | string | `"None"` | |
-| alloy-statefulset.serviceAccount.create | bool | `false` | |
 | batch.deployment | object | `{"send_batch_max_size":0,"send_batch_size":8192,"timeout":"200ms"}` | Configure batch processing options. |
 | batch.statefulset.send_batch_max_size | int | `0` | |
 | batch.statefulset.send_batch_size | int | `8192` | |
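The removed rows document defaults surfaced from the subcharts' values; a values override targeting those keys would look roughly like this (a sketch based only on the key paths in the deleted table, not taken from the PR):

```yaml
# Example override for keys from the removed table rows (illustrative).
alloy-deployment:
  controller:
    autoscaling:
      enabled: true
      minReplicas: 2
      maxReplicas: 5
      # Per the descriptions above: 0 disables CPU-based scaling,
      # so scaling here is driven by memory utilization alone.
      targetCPUUtilizationPercentage: 0
      targetMemoryUtilizationPercentage: 80
```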
2 changes: 1 addition & 1 deletion charts/lgtm-distributed/README.md

@@ -45,4 +45,4 @@ Umbrella chart for a distributed Loki, Grafana, Tempo and Mimir stack
 | tempo.ingester.replicas | int | `1` | |

 ----------------------------------------------
-Autogenerated from chart metadata using [helm-docs v1.8.1](https://github.com/norwoodj/helm-docs/releases/v1.8.1)
+Autogenerated from chart metadata using [helm-docs v1.14.2](https://github.com/norwoodj/helm-docs/releases/v1.14.2)
4 changes: 2 additions & 2 deletions charts/loki-distributed/README.md

@@ -114,7 +114,7 @@ kubectl delete statefulset RELEASE_NAME-loki-distributed-querier -n LOKI_NAMESPA
 | compactor.livenessProbe | object | `{}` | liveness probe settings for ingester pods. If empty use `loki.livenessProbe` |
 | compactor.nodeSelector | object | `{}` | Node selector for compactor pods |
 | compactor.persistence.annotations | object | `{}` | Annotations for compactor PVCs |
-| compactor.persistence.claims | list | `[{"name":"data","size":"10Gi","storageClass":null}]` | List of the compactor PVCs @notationType -- list |
+| compactor.persistence.claims | list | | List of the compactor PVCs |
 | compactor.persistence.enableStatefulSetAutoDeletePVC | bool | `false` | Enable StatefulSetAutoDeletePVC feature |
 | compactor.persistence.enabled | bool | `false` | Enable creating PVCs for the compactor |
 | compactor.persistence.size | string | `"10Gi"` | Size of persistent disk |

@@ -307,7 +307,7 @@ kubectl delete statefulset RELEASE_NAME-loki-distributed-querier -n LOKI_NAMESPA
 | ingester.maxSurge | int | `0` | Max Surge for ingester pods |
 | ingester.maxUnavailable | string | `nil` | Pod Disruption Budget maxUnavailable |
 | ingester.nodeSelector | object | `{}` | Node selector for ingester pods |
-| ingester.persistence.claims | list | `[{"name":"data","size":"10Gi","storageClass":null}]` | List of the ingester PVCs @notationType -- list |
+| ingester.persistence.claims | list | | List of the ingester PVCs |
 | ingester.persistence.enableStatefulSetAutoDeletePVC | bool | `false` | Enable StatefulSetAutoDeletePVC feature |
 | ingester.persistence.enabled | bool | `false` | Enable creating PVCs which is required when using boltdb-shipper |
 | ingester.persistence.inMemory | bool | `false` | Use emptyDir with ramdisk for storage. **Please note that all data in ingester will be lost on pod restart** |
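The only content change in both hunks is the `claims` row: the newer helm-docs no longer renders the inline default or the `@notationType -- list` annotation in the description cell. Reconstructed as values, the old rendered default `[{"name":"data","size":"10Gi","storageClass":null}]` corresponds to roughly this shape (a sketch implied by the removed default, not checked against the chart source):

```yaml
# Shape implied by the removed default value (illustrative sketch).
ingester:
  persistence:
    enabled: true
    claims:
      - name: data
        size: 10Gi
        storageClass: null
```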
2 changes: 1 addition & 1 deletion charts/synthetic-monitoring-agent/README.md

@@ -77,4 +77,4 @@ Kubernetes: `^1.16.0-0`
 | tolerations | list | `[]` | List of node taints to tolerate. |

 ----------------------------------------------
-Autogenerated from chart metadata using [helm-docs v1.8.1](https://github.com/norwoodj/helm-docs/releases/v1.8.1)
+Autogenerated from chart metadata using [helm-docs v1.14.2](https://github.com/norwoodj/helm-docs/releases/v1.14.2)