Describe the bug
When the OTel Collector is run as a standalone pod (not as a sidecar), k8s-related labels such as pod and namespace names are missing from the otelcol_* metrics of its internal telemetry.
Steps to reproduce
We have OTel Collector deployed in two scenarios:
In the sidecar scenario, the OpenTelemetry Collector runs as a sidecar container in the app pod and exports the metrics to the 'central' OTel Collector.
In the standalone scenario, the 'central' OpenTelemetry Collector runs as a standalone pod, receiving the metrics from the app sidecar collectors from multiple namespaces.
What did you expect to see?
We expected the standalone scenario to get the same treatment as the sidecar scenario, where the metrics do get labels like k8s_namespace_name and k8s_pod_name, which we need in order to select the metrics in Grafana dashboards.
What did you see instead?
Out of the box, in the standalone scenario no k8s-related labels are added: the otelcol_* metric labels contain no k8s_* labels at all, while in the sidecar scenario the same metrics do carry k8s_pod_name and k8s_namespace_name.
What version did you use?
We run OTel Collectors in our k8s clusters, installed as Helm charts.
Chart version: opentelemetry-operator:0.74.3
OTel image version override: 0.115.1 (tried on 0.114.0 too)
What config did you use?
In the sidecar scenario, the OpenTelemetry Collector runs as a sidecar container in the app pod and exports the metrics to the 'central' OTel Collector. Simplified config looks like this:
config:
  receivers:
    otlp/unix:
      protocols:
        grpc:
          transport: unix
          endpoint: "@otlp.sock"
  exporters:
    otlp:
      endpoint: workloads-collector.opentelemetry.svc.cluster.local:4317 # metrics received from app are being sent to 'central' collector
      tls:
        insecure: true
  processors:
    k8sattributes:
      passthrough: false
      extract:
        metadata:
          - k8s.deployment.name
          - k8s.pod.start_time
        labels:
          - tag_name: component
            key: component
            from: pod
          - tag_name: app
            key: app
            from: pod
          - tag_name: environment
            key: env
            from: pod
      pod_association:
        - sources:
            - from: resource_attribute
              name: k8s.pod.ip
        - sources:
            - from: resource_attribute
              name: k8s.pod.uid
        - sources:
            - from: connection
  service:
    telemetry:
      metrics:
        level: detailed
        readers:
          - periodic:
              interval: 10000
              exporter:
                otlp:
                  protocol: grpc/protobuf
                  endpoint: workloads-collector.opentelemetry.svc.cluster.local:4317 # collector metrics are sent to 'central' collector too
                  #endpoint: unix:otlp.sock # specifying unix socket does not seem to work due to enforced string validation, so the sidecar collector isn't able to send metrics to itself
    pipelines:
      metrics:
        receivers: [otlp/unix]
        processors: [k8sattributes]
        exporters: [otlp]
In the standalone scenario, the 'central' OpenTelemetry Collector runs as a standalone pod, receiving the metrics from the app sidecar collectors from multiple namespaces.
Simplified config looks like this:
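(A minimal sketch rather than the literal config from our cluster: the OTLP gRPC receiver on 4317 matches the sidecar exporter above, while the k8sattributes association by client connection and the prometheusremotewrite exporter endpoint are placeholder assumptions standing in for what we actually run.)

config:
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317
  processors:
    k8sattributes:
      extract:
        metadata:
          - k8s.namespace.name
          - k8s.pod.name
      pod_association:
        - sources:
            - from: connection # associate incoming data with the calling pod via its IP
  exporters:
    prometheusremotewrite: # assumption: stand-in for the actual metrics backend
      endpoint: http://metrics-backend.example.svc:9090/api/v1/write
  service:
    pipelines:
      metrics:
        receivers: [otlp]
        processors: [k8sattributes]
        exporters: [prometheusremotewrite]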
Environment
All workloads are run in Azure k8s/AKS v1.30
Additional context
An additional question on this topic: what is the suggested way of debugging the (re)labeling in OTel Collectors? Something like the step-by-step relabeling details that the Prometheus UI provides, perhaps?
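For context, what we can do today is attach the debug exporter with detailed verbosity, which prints each batch's final resource attributes and data points to the collector's stdout, but it does not show the per-processor steps. A minimal sketch against the sidecar pipeline above:

  exporters:
    debug:
      verbosity: detailed # prints resource attributes and data points of each batch to stdout
  service:
    pipelines:
      metrics:
        receivers: [otlp/unix]
        processors: [k8sattributes]
        exporters: [debug, otlp] # temporarily fan out to the debug exporter alongside the real one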