[Bug] ServiceMonitor resources cannot be deployed from Helm Chart #24

Open
GotoRen opened this issue Mar 4, 2024 · 0 comments
GotoRen commented Mar 4, 2024

🐞 Bug Report

Bug details

I have enabled ServiceMonitor but the manifest is not deployed.

Expected behavior

If serviceMonitor.enabled: true, the ServiceMonitor manifest should be available.

Steps to reproduce (including prerequisites)

Specifically, prepare the following values file and run the commands shown below.

  • values.yaml
# Default values for prometheus-yace-exporter.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: quay.io/invisionag/yet-another-cloudwatch-exporter
  tag: v0.28.0-alpha
  pullPolicy: IfNotPresent

nameOverride: ""
fullnameOverride: ""

service:
  type: ClusterIP
  port: 80
  annotations: {}
  labels: {}

ingress:
  enabled: false
  annotations:
    {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: []

  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources:
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  limits:
    cpu: 500m
    memory: 2Gi
  requests:
    cpu: 500m
    memory: 2Gi

nodeSelector: {}

tolerations: []

affinity: {}

podAnnotations: {}

podLabels: {}

extraArgs: []
#   decoupled-scraping: false
#   scraping-interval: 300

aws:
  role:

  # The name of a pre-created secret in which AWS credentials are stored. When
  # set, aws_access_key_id is assumed to be in a field called access_key,
  # aws_secret_access_key is assumed to be in a field called secret_key, and the
  # session token, if it exists, is assumed to be in a field called
  # security_token
  secret:
    name:
    includesSessionToken: false

  # Note: Do not specify the aws_access_key_id and aws_secret_access_key if you specified role or secret.name before
  aws_access_key_id:
  aws_secret_access_key:

serviceAccount:
  # Specifies whether a ServiceAccount should be created
  create: true
  annotations: {}
  labels: {}
  # The name of the ServiceAccount to use.
  # If not set and create is true, a name is generated using the fullname template
  name: prometheus-yace-exporter

rbac:
  # Specifies whether RBAC resources should be created
  create: true

serviceMonitor:
  # When set to true, use a ServiceMonitor to configure scraping
  enabled: true
  # Set the namespace the ServiceMonitor should be deployed in
  # namespace: monitoring
  # Set how frequently Prometheus should scrape
  interval: 60s
  # Set targetPort for the ServiceMonitor
  port: http
  # Set the path to the cloudwatch-exporter telemetry-path
  # telemetryPath: /metrics
  # Set labels for the ServiceMonitor, use this to define your scrape label for Prometheus Operator
  labels:
    prometheus: agent
  # Set timeout for scrape
  timeout: 30s

config: |-
  discovery:
    jobs:
      - type: AWS/ElastiCache
        regions:
          - ap-northeast-1
        period: 600
        length: 300
        enableMetricData: true
        metrics:
          - name: ActiveDefragHits
            statistics: [Average, Sum]

With the following command, the ServiceMonitor resource is rendered as expected.

$ kubectl create namespace monitoring >/dev/null 2>&1
$ helm repo add prometheus-yace-exporter https://mogaal.github.io/helm-charts/ >/dev/null 2>&1
$ helm repo update >/dev/null 2>&1
$ helm upgrade prometheus-yace-exporter prometheus-yace-exporter/prometheus-yace-exporter \
    --install \
    --namespace monitoring \
    -f ./values.yaml \
    --dry-run=client

With the following command, the ServiceMonitor resource is not rendered.

$ helm template prometheus-yace-exporter prometheus-yace-exporter/prometheus-yace-exporter -f ./values.yaml --dry-run=client
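
As a workaround for helm template, the expected API version can be injected explicitly via the standard --api-versions flag (this assumes the template guards the resource with the .Capabilities check quoted in the Cause section below):

$ helm template prometheus-yace-exporter prometheus-yace-exporter/prometheus-yace-exporter \
    -f ./values.yaml \
    --api-versions monitoring.coreos.com/v1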

The same failure occurs when the chart is inflated through Kustomize.

  • kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - servicemonitor.yaml
helmCharts:
  - name: prometheus-yace-exporter
    repo: https://mogaal.github.io/helm-charts/
    releaseName: prometheus-yace-exporter
    version: 0.5.0
    valuesFile: values.yaml
    namespace: monitoring
    includeCRDs: true
$ kustomize build ./ --enable-helm
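
Newer kustomize releases also accept an apiVersions field on helmCharts entries, which is passed through to helm template; whether your kustomize version supports this field is an assumption to verify. If it does, a sketch of the same workaround:

helmCharts:
  - name: prometheus-yace-exporter
    repo: https://mogaal.github.io/helm-charts/
    releaseName: prometheus-yace-exporter
    version: 0.5.0
    valuesFile: values.yaml
    namespace: monitoring
    includeCRDs: true
    apiVersions:
      - monitoring.coreos.com/v1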

Environment

Cause

The issue stems from the incorrect evaluation of .Capabilities.APIVersions in the chart's ServiceMonitor template. In my environment the monitoring.coreos.com/v1 API exists, but the check evaluates to false, so the ServiceMonitor manifest is never rendered. Because helm template renders offline, .Capabilities.APIVersions does not include CRD-provided API versions such as monitoring.coreos.com/v1 unless they are supplied via --api-versions.

You should remove this condition from the template:

and ( .Capabilities.APIVersions.Has "monitoring.coreos.com/v1" )
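
For reference, a minimal sketch of the guard in the chart's templates/servicemonitor.yaml before and after the suggested change (the file name and surrounding lines are assumptions; only the quoted condition above is taken from the chart):

# before (assumed)
{{- if and .Values.serviceMonitor.enabled ( .Capabilities.APIVersions.Has "monitoring.coreos.com/v1" ) }}
...
{{- end }}

# after
{{- if .Values.serviceMonitor.enabled }}
...
{{- end }}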