Agent Environment
Deployed in Kubernetes using the official Datadog Helm chart.
Versions:

Our whole kubeStateMetricsCore configuration from our usage of the Datadog Helm chart:
```yaml
...
kubeStateMetricsCore:
  enabled: true
  # Don't run on main agent daemonset pods to keep resource usage stable across pods
  useClusterCheckRunners: true
  collectVpaMetrics: true
  # Common labels found on most K8S resources that we'd like as tags in DD
  # See https://docs.datadoghq.com/integrations/kubernetes_state_core/?tab=helm#default-labels-as-tags
  # for labels already collected
  ksm_common_label_map: &ksm_common_label_map
    # Our custom labels
    app: leafly_app
    notifications_leafly_io_slack: notifications_route_slack
    notifications_leafly_io_opsgenie: notifications_route_opsgenie
    process: leafly_process
    release: leafly_release
    # External labels we wanna collect
    k8s_app: kube_system_app # usually provided by kOps on manifests it manages
  labelsAsTags:
    container: *ksm_common_label_map
    cronjob: *ksm_common_label_map
    daemonset: *ksm_common_label_map
    deployment: *ksm_common_label_map
    endpoint: *ksm_common_label_map
    horizontalpodautoscaler: *ksm_common_label_map
    ingress: *ksm_common_label_map
    job: *ksm_common_label_map
    node:
      kops_k8s_io_instancegroup: kops_instancegroup
      leafly_node_util: util_node
    pdb: *ksm_common_label_map
    persistentvolumeclaim: *ksm_common_label_map
    pod: *ksm_common_label_map
    replicaset: *ksm_common_label_map
    service: *ksm_common_label_map
    statefulset: *ksm_common_label_map
...
```
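To make the intent of `labelsAsTags` concrete, here is a toy model (not the agent's actual implementation — function and variable names are illustrative) of how each resource label named in the mapping becomes a `<tag_name>:<label_value>` tag on the resource's metrics:

```go
package main

import (
	"fmt"
	"sort"
)

// labelsToTags is a simplified model of the labelsAsTags behavior:
// for every Kubernetes label present in the mapping, emit a Datadog
// tag "<tag_name>:<label_value>". Labels not in the mapping are skipped.
func labelsToTags(resourceLabels, labelsAsTags map[string]string) []string {
	tags := []string{}
	for label, tagName := range labelsAsTags {
		if value, ok := resourceLabels[label]; ok {
			tags = append(tags, fmt.Sprintf("%s:%s", tagName, value))
		}
	}
	sort.Strings(tags) // deterministic order for display
	return tags
}

func main() {
	// Subset of the ksm_common_label_map above.
	mapping := map[string]string{
		"app":     "leafly_app",
		"process": "leafly_process",
	}
	// Hypothetical labels on one of our resources.
	labels := map[string]string{
		"app":       "storefront",
		"process":   "web",
		"unrelated": "ignored", // not in the mapping, so no tag is emitted
	}
	fmt.Println(labelsToTags(labels, mapping))
	// → [leafly_app:storefront leafly_process:web]
}
```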
Describe what happened:
Our KSM-based HPA metrics do not have the default labels attached to them, unlike metrics for other resources (see screenshots below).
What our tags look like for a kubernetes_state.hpa. metric (kubernetes_state.hpa.max_replicas in this case)
What our tags look like for another kubernetes_state. metric that is working as expected (kubernetes_state.deployment.replicas_ready in this case)
Describe what you expected:
I expect most of the kubernetes_state.hpa.* metrics to have the following tags attached to them, with no configuration on our end, based on the documentation here (I can confirm the labels exist on our resources):
kube_app_name
kube_app_instance
kube_app_version
kube_app_component
kube_app_part_of
kube_app_managed_by
helm_chart
env
service
version
Steps to reproduce the issue:
1. Deploy the Datadog Helm chart to a Kubernetes cluster with the kubeStateMetricsCore values config noted above
2. Deploy an HPA resource that utilizes standard K8S labels and/or Datadog Unified Service Tagging labels
3. Observe that metrics emitted by Datadog do not have the aforementioned tags attached
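For step 2, a hypothetical HPA manifest (names and values illustrative) carrying both the standard recommended Kubernetes labels and the Unified Service Tagging labels would look like:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
  labels:
    app.kubernetes.io/name: web
    app.kubernetes.io/instance: web-prod
    app.kubernetes.io/version: "1.2.3"
    tags.datadoghq.com/env: prod
    tags.datadoghq.com/service: web
    tags.datadoghq.com/version: "1.2.3"
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
```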
Potential Fix?
I noticed the defaultLabelJoins function here does not include kube_horizontalpodautoscaler_labels in the returned map, which is what KSM calls the label metric it emits for HPAs. So maybe adding that entry is all that needs to happen for this issue to be resolved? But I'm not certain.
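To illustrate why a missing entry would produce exactly this symptom, here is a toy model of KSM label joins (not the agent's actual implementation — the type and function names are made up): KSM emits the labels in a separate `kube_<resource>_labels` info metric, and those labels are only copied onto a resource's other metrics if that info metric family is listed in the join map.

```go
package main

import "fmt"

// metric is a simplified KSM metric: a family name plus its label set.
type metric struct {
	name   string
	labels map[string]string
}

// joinedTags models the label join: tags from a *_labels info metric are
// copied onto another metric of the same HPA, but only when the info
// metric's family name appears in labelJoins.
func joinedTags(m metric, infoMetrics []metric, labelJoins map[string]bool) map[string]string {
	tags := map[string]string{}
	for k, v := range m.labels {
		tags[k] = v
	}
	for _, info := range infoMetrics {
		if !labelJoins[info.name] {
			continue // family absent from the join map => no tags joined
		}
		// Join on the identifying labels both metrics share.
		if info.labels["namespace"] == m.labels["namespace"] &&
			info.labels["horizontalpodautoscaler"] == m.labels["horizontalpodautoscaler"] {
			for k, v := range info.labels {
				tags[k] = v
			}
		}
	}
	return tags
}

func main() {
	hpaMax := metric{"kube_horizontalpodautoscaler_spec_max_replicas",
		map[string]string{"namespace": "default", "horizontalpodautoscaler": "web"}}
	info := []metric{{"kube_horizontalpodautoscaler_labels",
		map[string]string{"namespace": "default", "horizontalpodautoscaler": "web",
			"label_tags_datadoghq_com_env": "prod"}}}

	// Today: the HPA family is missing from the default joins, so no env label.
	fmt.Println(joinedTags(hpaMax, info, map[string]bool{})["label_tags_datadoghq_com_env"])
	// With the proposed entry added, the env label is joined on.
	fmt.Println(joinedTags(hpaMax, info,
		map[string]bool{"kube_horizontalpodautoscaler_labels": true})["label_tags_datadoghq_com_env"])
}
```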