
v2: k8sattributes metadata not working in k8s-monitoring-helm #1039

Open
cxk314 opened this issue Dec 27, 2024 · 1 comment

Comments

@cxk314

cxk314 commented Dec 27, 2024

Hi,
I am trying to get k8sattributes metadata added as labels to logs, metrics, and traces, but it doesn't appear to be working. Are those attributes only added to certain telemetry types (say, metrics) and not others (logs), or should this config add them to everything? Are there any rules about which attributes apply to which signals? Here is the config I am trying:

cluster:
  name: sb-cluster

destinations:
  - name: otlp
    type: otlp
    url: https://np-grpc.np-shared.com
    processors:
      batch:
        size: 2000
        maxSize: 2000
      k8sattributes:
        # -- Kubernetes metadata to extract and add to the attributes of the received telemetry data.
        # @section -- Processors: K8s Attributes
        metadata:
          - container.name
          - deployment.environment
          - k8s.cluster.name
          - k8s.container.name
          - k8s.replicaset.name
          - k8s.namespace.name
          - k8s.pod.name
          - k8s.deployment.name
          - service.name
          - service.namespace
          - k8s.statefulset.name
          - service.instance.id
          - k8s.daemonset.name
          - k8s.cronjob.name
          - k8s.job.name
          - k8s.node.name
          - k8s.pod.uid
          - k8s.pod.start_time
      attributes:
        actions:
          - key: TENANT_ID
            action: upsert
            value: sb-cluster
      transform:
        metrics:
          resource:
            - set(attributes["CUSTOM_TENANT_ID"], "Test_Custom_Label") where attributes["k8s.namespace.name"] == "dapr-system"
        logs:
          log:
            - set(resource.attributes["TENANT_ID"], "sb-cluster")
        traces:
          resource:
            - set(attributes["CUSTOM_TENANT_ID"], "Test_Custom_Label") where attributes["k8s.namespace.name"] == "dapr-system"
    metrics:
      # -- Whether to send metrics to the OTLP destination.
      # @section -- Telemetry
      enabled: true
    logs:
      # -- Whether to send logs to the OTLP destination.
      # @section -- Telemetry
      enabled: true
    tls:
      # -- Disables validation of the server certificate.
      # @section -- TLS
      insecureSkipVerify: true

clusterMetrics:
  enabled: true
  kube-state-metrics:
    podAnnotations: {kubernetes.azure.com/set-kube-service-host-fqdn: "true"}
    metricsTuning:
      useDefaultAllowList: false
      includeMetrics:
      - kube_namespace_labels
      - kube_service_created
      - kube_deployment_created
      - kube_configmap_created
      - kube_ingress_created
      - kube_secret_created
      - kube_persistentvolumeclaim_info
      - kube_persistentvolume_status_phase
      - kube_pod_container_status_ready
      - kube_pod_container_status_waiting
      - kube_pod_container_status_terminated
    metricLabelsAllowlist:
      - pods=[*]
      - namespaces=[*]

clusterEvents:
  enabled: true

podLogs:
  enabled: true

nodeLogs:
  enabled: true

windows-exporter:
  deploy: false

selfReporting:
  enabled: false

# -- Application Observability.
# Requires destinations that support metrics, logs, and traces.
# To see the valid options, please see the [Application Observability feature documentation](https://github.com/grafana/k8s-monitoring-helm/tree/main/charts/feature-application-observability).
# @default -- Disabled
# @section -- Features - Application Observability
applicationObservability:
  # -- Enable gathering Kubernetes Pod logs.
  # @section -- Features - Application Observability
  enabled: true
  receivers:
    otlp:
      grpc:
        enabled: true

alloy-metrics:
  enabled: true
  liveDebugging:
    enabled: true
  alloy:
    enableReporting: false
    stabilityLevel: experimental
  controller:
    podAnnotations: {kubernetes.azure.com/set-kube-service-host-fqdn: "true"}

alloy-singleton:
  enabled: true
  liveDebugging:
    enabled: true
  controller:
    podAnnotations: {kubernetes.azure.com/set-kube-service-host-fqdn: "true"}
  alloy:
    enableReporting: false
    stabilityLevel: experimental

alloy-logs:
  enabled: true
  liveDebugging:
    enabled: true
  alloy:
    enableReporting: false
    stabilityLevel: experimental
  controller:
    podAnnotations: {kubernetes.azure.com/set-kube-service-host-fqdn: "true"}

# An Alloy instance for opening receivers to collect application data.
alloy-receiver:
  # -- Deploy the Alloy instance for opening receivers to collect application data.
  # @section -- Collectors - Alloy Receiver
  enabled: true
  liveDebugging:
    enabled: true
  controller:
    podAnnotations: {kubernetes.azure.com/set-kube-service-host-fqdn: "true"}
  extraConfig: |-
      faro.receiver "integrations_app_agent_receiver" {
        server {
          listen_address           = "0.0.0.0"
          listen_port              = 8027
          cors_allowed_origins     = ["*"]
          max_allowed_payload_size = "10MiB"

          rate_limiting {
            rate = 100
          }
        }

        output {
          logs   = [otelcol.receiver.loki.otlp.receiver]
          traces = [otelcol.processor.transform.otlp.input]
        }
      }
  alloy:
    enableReporting: false
    stabilityLevel: experimental
    extraPorts:
      - name: otlp-grpc
        port: 4317
        targetPort: 4317
        protocol: TCP
      - name: faro
        port: 8027
        targetPort: 8027
        protocol: TCP

In the Alloy live debugging view there is no additional metadata either:

Trace ID: 
Span ID: 
Flags: 0
ResourceLog #0
Resource SchemaURL: 
ScopeLogs #0
ScopeLogs SchemaURL: 
InstrumentationScope  
LogRecord #0
ObservedTimestamp: 2024-12-27 01:57:49.183621136 +0000 UTC
Timestamp: 2024-12-27 01:57:48.933702716 +0000 UTC
SeverityText: 
SeverityNumber: Unspecified(0)
Body: Str(I1227 01:57:48.933621       1 bounded_frequency_runner.go:296] sync-runner: ran, next possible in 1s, periodic in 1h0m0s)
Attributes:
     -> loki.attribute.labels: Str(container,job,pod,namespace)
     -> pod: Str(kube-proxy-tlgtv)
     -> namespace: Str(kube-system)
     -> job: Str(kube-system/kube-proxy)
     -> container: Str(kube-proxy)
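
For context, the `metadata` list in the values is what the chart renders into Alloy's `otelcol.processor.k8sattributes` component. A rough sketch of the generated block (the component label and downstream exporter name here are illustrative, not the chart's actual rendered names):

```alloy
otelcol.processor.k8sattributes "default" {
  // Resource attributes to extract from pod/namespace metadata.
  extract {
    metadata = [
      "k8s.namespace.name",
      "k8s.pod.name",
      "k8s.deployment.name",
      "k8s.node.name",
    ]
  }

  // Enriched telemetry is forwarded to the exporter.
  output {
    metrics = [otelcol.exporter.otlp.default.input]
    logs    = [otelcol.exporter.otlp.default.input]
    traces  = [otelcol.exporter.otlp.default.input]
  }
}
```

If this block never appears in the rendered Alloy config (check the generated ConfigMap), the values key is being ignored rather than misbehaving.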
@cxk314 cxk314 changed the title k8sattributes metadata not working in k8s-monitoring-helm v2 v2: k8sattributes metadata not working in k8s-monitoring-helm Dec 27, 2024
@petewall
Collaborator

petewall commented Jan 2, 2025

One thing to try: the k8sattributes processor goes inside the applicationObservability feature:

applicationObservability:
  processors:
    k8sattributes:
      metadata:
        - container.name
        - deployment.environment
        - k8s.cluster.name
        - k8s.container.name
        - k8s.replicaset.name
        - k8s.namespace.name
        - k8s.pod.name
        - k8s.deployment.name
        - service.name
        - service.namespace
        - k8s.statefulset.name
        - service.instance.id
        - k8s.daemonset.name
        - k8s.cronjob.name
        - k8s.job.name
        - k8s.node.name
        - k8s.pod.uid
        - k8s.pod.start_time
    attributes:
      actions:
        - key: TENANT_ID
          action: upsert
          value: sb-cluster

So the section inside the otlp destination won't actually do anything.
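
Putting that together, a minimal sketch of the corrected values (assuming the rest of the original file stays as-is, and trimming the metadata list for brevity) would keep `batch` on the destination and move `k8sattributes` and `attributes` under the feature:

```yaml
destinations:
  - name: otlp
    type: otlp
    url: https://np-grpc.np-shared.com
    processors:
      batch:            # batch stays on the destination
        size: 2000
        maxSize: 2000

applicationObservability:
  enabled: true
  receivers:
    otlp:
      grpc:
        enabled: true
  processors:           # enrichment moves here
    k8sattributes:
      metadata:
        - k8s.namespace.name
        - k8s.pod.name
        - k8s.deployment.name
        - k8s.node.name
    attributes:
      actions:
        - key: TENANT_ID
          action: upsert
          value: sb-cluster
```

Note this enriches telemetry flowing through the applicationObservability pipelines (the OTLP receiver); it does not relabel what podLogs or clusterMetrics collect.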
