
question: how to add "automountServiceAccountToken: false" for vmagent #2006

Open
hwxy233 opened this issue Feb 21, 2025 · 6 comments
@hwxy233

hwxy233 commented Feb 21, 2025

Chart name and version, where you feel a lack of requested feature
chart: victoria-metrics-k8s-stack
version: v0.36.2

Is your feature request related to a problem? Please describe.

Dear team, I use the victoria-metrics-k8s-stack Helm chart to monitor an AKS cluster. Recently, the Azure administrator told me that a "Kubernetes clusters should disable automounting API credentials" finding was reported for the vmagent-vmks-victoria-metrics-k8s-stack-xxx pod.
To fix this, I tried to add automountServiceAccountToken: false under the vmagent section of values.yaml, but it does not work.
I then read the vmagentspec doc, but could not find an automountServiceAccountToken field.
My question is: how can I set automountServiceAccountToken: false for vmagent? Thanks.

Describe the solution you'd like
I would like to be able to set "automountServiceAccountToken: false" and a projected token volume in the vmagent part of values.yaml, like in the victoria-metrics-operator part:

victoria-metrics-operator:
  serviceAccount:
    automountServiceAccountToken: false
  extraVolumes:
    - name: operator
      projected:
        sources:
          - downwardAPI:
              items:
                - fieldRef:
                    apiVersion: v1
                    fieldPath: metadata.namespace
                  path: namespace
          - configMap:
              name: kube-root-ca.crt
          - serviceAccountToken:
              expirationSeconds: 7200
              path: token
  extraVolumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: operator

Describe alternatives you've considered
N/A

Additional context

The k8s cluster version is 1.27.9

The vmagent part in values.yaml is present:

vmagent:
  # -- Create VMAgent CR
  enabled: true
  # -- VMAgent annotations
  annotations: { }
  # -- Remote write configuration of VMAgent, allowed parameters defined in a [spec](https://docs.victoriametrics.com/operator/api#vmagentremotewritespec)
  additionalRemoteWrites:
    - url: https://my-single-vm-node-ouside-aks/victoria-write/

  # -- (object) Full spec for VMAgent CRD.
  #  Allowed values described [here](https://docs.victoriametrics.com/operator/api#vmagentspec)
  automountServiceAccountToken: false # this is not working
  spec:
    # automountServiceAccountToken: false # I try to add it there, but it is also not working
    port: "8429"
    # bugfix: non root
    securityContext:
      fsGroup: 65534
      runAsGroup: 65534
      runAsNonRoot: true
      runAsUser: 65534
    containers:
      - name: config-reloader
        image: myacr/prometheus-operator/prometheus-config-reloader:v0.68.0
    initContainers:
      - name: config-init
        image: myacr/prometheus-operator/prometheus-config-reloader:v0.68.0
    image:
      repository: myacr/victoriametrics/vmagent
      tag: v1.102.1
      pullPolicy: IfNotPresent
    selectAllByDefault: true
    scrapeInterval: 20s
    externalLabels:
      # For multi-cluster setups it is useful to use "cluster" label to identify the metrics source.
      # For example:
      cluster: my-aks-cluster
    extraArgs:
      promscrape.streamParse: "true"
      # Do not store original labels in vmagent's memory by default. This reduces the amount of memory used by vmagent
      # but makes vmagent debugging UI less informative. See: https://docs.victoriametrics.com/vmagent/#relabel-debug
      promscrape.dropOriginalLabels: "true"
  # -- (object) VMAgent ingress configuration
  ingress:
    enabled: false
    # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
    # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
    # ingressClassName: nginx
    # Values can be templated
    annotations:
      { }
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
    labels: { }
    path: ""
    pathType: Prefix

    hosts:
      - vmagent.domain.com
    extraPaths: [ ]
@AndrewChubatiuk
Collaborator

hey @hwxy233
unfortunately this is not supported by the operator at the moment

@hwxy233
Author

hwxy233 commented Feb 24, 2025

hi @AndrewChubatiuk, thanks for the answer. I found at https://docs.victoriametrics.com/helm/victoriametrics-agent/ that the single-vmagent chart can set serviceAccount.automountToken, so I will try the vmagent Helm chart.

@tiny-pangolin

@zekker6 I believe we ran into this problem with a different user recently. Was the solution to create the service account separately and specify that service account when configuring the CRD? Also, since this has occurred multiple times, should we have a blog post or docs describing how to use the k8s stack in AKS?

@zekker6
Contributor

zekker6 commented Feb 25, 2025

Hi @hwxy233 @tiny-pangolin

I believe we ran into this problem with a different user recently. Was the solution to create the service account separately and specify that service account when configuring the CRD?

Yes, it is possible to use serviceAccountName as part of the VMAgent spec to point it to a manually created service account with automatic token mounting disabled.

Also, since this has occurred multiple times, should we have a blog post or docs describing how to use the k8s stack in AKS?

We do have a somewhat similar blog post, but it does not cover the enhanced security configuration for AKS itself. It would be great to either extend/update that blog post or convert it into a guide as part of our docs.

@hwxy233
Author

hwxy233 commented Feb 27, 2025

Hi @tiny-pangolin @zekker6

Thank you for the suggestion. I have found a way to solve this problem and updated this comment.

The Azure BlockAutomountToken policy (Azure BlockAutomountToken Policy and template.yaml) has two checkpoints: first, it flags a pod whose spec sets automountServiceAccountToken: true; second, if the field is not set at all, it flags any container with a volume mount whose mountPath is /var/run/secrets/kubernetes.io/serviceaccount.

mountServiceAccountToken(spec) {
    spec.automountServiceAccountToken == true
}

# if there is no automountServiceAccountToken in the spec, check volumeMounts in containers.
# The service account token is mounted at /var/run/secrets/kubernetes.io/serviceaccount
# https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#serviceaccount-admission-controller
mountServiceAccountToken(spec) {
    not has_key(spec, "automountServiceAccountToken")
    "/var/run/secrets/kubernetes.io/serviceaccount" == input_containers[_].volumeMounts[_].mountPath
}
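The two Rego checkpoints can be sketched in Python to reason about which pod specs the policy flags (a hypothetical re-implementation for illustration only; the real policy is evaluated in Rego by Azure Policy, and I am assuming its container check covers both regular and init containers):

```python
# Hypothetical Python re-implementation of the two BlockAutomountToken
# checkpoints, for illustration only.

SA_TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount"

def mounts_service_account_token(pod_spec: dict) -> bool:
    """Return True if the policy would flag this pod spec."""
    # Checkpoint 1: automountServiceAccountToken is explicitly set to true.
    if pod_spec.get("automountServiceAccountToken") is True:
        return True
    # Checkpoint 2: the field is absent, but a container mounts the default
    # token path (assuming init containers are checked too).
    if "automountServiceAccountToken" not in pod_spec:
        containers = pod_spec.get("containers", []) + pod_spec.get("initContainers", [])
        for container in containers:
            for mount in container.get("volumeMounts", []):
                if mount.get("mountPath") == SA_TOKEN_PATH:
                    return True
    return False
```

Note that setting automountServiceAccountToken: false satisfies both checkpoints, which is why the solution below combines that setting with a manually projected token.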

Solution Overview

3 Key Steps

  1. Create Custom Service Account with disabled auto-token mounting
  2. Manual Projected Volume for token & CA certificate
  3. Reconfigure Paths in vmagent and scrape configs

Below is a streamlined guide:

1. Custom Service Account Configuration

1.1 Service Account Definition

Key configuration: automountServiceAccountToken: false

# sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vmagent-service-account
  namespace: <YOUR_NAMESPACE>
automountServiceAccountToken: false  # Critical security setting

Apply with:

kubectl apply -f sa.yaml -n <YOUR_NAMESPACE>

1.2 Configure RBAC Rules

vmagent queries the Kubernetes API, so a ClusterRole and a ClusterRoleBinding are required for authorization.

ClusterRole:

Remove unnecessary permissions from ClusterRole based on your monitoring needs.

# cr.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: vmagent-monitoring-role
rules:
  - apiGroups:
      - discovery.k8s.io
    resources:
      - endpointslices
    verbs: [ "get", "list", "watch" ]
  - apiGroups:
      - ""
    resources:
      - nodes
      - nodes/metrics
      - nodes/proxy
      - services
      - endpoints
      - pods
      - configmaps
      - namespaces
      - secrets
    verbs: [ "get", "list", "watch" ]
  - apiGroups:
      - networking.k8s.io
      - extensions
    resources:
      - ingresses
    verbs: [ "get", "list", "watch" ]
  - nonResourceURLs:
      - /metrics
      - /metrics/resources
    verbs: [ "get", "list", "watch" ]

ClusterRoleBinding

# crb.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vmagent-monitoring-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: vmagent-monitoring-role
subjects:
  - kind: ServiceAccount
    name: vmagent-service-account
    namespace: <YOUR_NAMESPACE>

Apply with:

kubectl apply -f cr.yaml
kubectl apply -f crb.yaml

Verify binding:

kubectl describe clusterrolebinding vmagent-monitoring-role-binding
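As an offline sanity check of the rules above, the ClusterRole matching logic can be sketched roughly like this (a simplified illustration only; real authorization decisions are made by the API server, e.g. check them with kubectl auth can-i):

```python
# Simplified sketch of ClusterRole rule matching for the rules defined above;
# illustrative only. It ignores RBAC features such as resourceNames and wildcards.

RULES = [
    {"apiGroups": ["discovery.k8s.io"], "resources": ["endpointslices"],
     "verbs": ["get", "list", "watch"]},
    {"apiGroups": [""], "resources": ["nodes", "nodes/metrics", "nodes/proxy", "services",
                                      "endpoints", "pods", "configmaps", "namespaces", "secrets"],
     "verbs": ["get", "list", "watch"]},
    {"apiGroups": ["networking.k8s.io", "extensions"], "resources": ["ingresses"],
     "verbs": ["get", "list", "watch"]},
]

def allowed(api_group: str, resource: str, verb: str) -> bool:
    """Return True if any rule covers the requested (group, resource, verb)."""
    return any(api_group in rule["apiGroups"]
               and resource in rule["resources"]
               and verb in rule["verbs"]
               for rule in RULES)
```

This makes it easy to see which requests a trimmed-down ClusterRole would still cover before applying it.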

1.3 Use this serviceAccount in values.yaml

# values.yaml
vmagent:
  enabled: true
  spec:
    serviceAccountName: vmagent-service-account

2. Manual Projected Volume Setup

2.1 values.yaml Configuration

# values.yaml
vmagent:
  spec:
    volumeMounts:
      - mountPath: /etc/vmagent/apitoken
        name: kube-api-access-vmagent
        readOnly: true
    volumes:
      - name: kube-api-access-vmagent
        projected:
          defaultMode: 420
          sources:
            - serviceAccountToken:
                expirationSeconds: 3607
                path: token
            - configMap:
                name: kube-root-ca.crt
            - downwardAPI:
                items:
                  - fieldRef:
                      apiVersion: v1
                      fieldPath: metadata.namespace
                    path: namespace

3. Reconfigure Authentication Paths

3.1 vmagent API Server Config

vmagent:
  spec:
    aPIServerConfig:
      bearerTokenFile: /etc/vmagent/apitoken/token
      host: https://kubernetes.default.svc  # Cluster-internal DNS
      tlsConfig:
        caFile: /etc/vmagent/apitoken/ca.crt
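This is the standard bearer-token flow any Kubernetes API client uses: read the projected token and send it as an Authorization header, verifying the server against the projected CA. A minimal sketch with a hypothetical helper (not part of vmagent):

```python
# Sketch of the bearer-token authentication vmagent performs with
# bearerTokenFile; illustrative only. TLS verification against the projected
# ca.crt would be configured separately (vmagent does it via tlsConfig.caFile).
import urllib.request

def build_api_request(host: str, token_file: str, path: str = "/version") -> urllib.request.Request:
    """Read the projected token and attach it as a Bearer header."""
    with open(token_file) as f:
        token = f.read().strip()
    return urllib.request.Request(host + path,
                                  headers={"Authorization": f"Bearer {token}"})
```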

3.2 Scrape Config Adjustments

For example, for kubelet and kubeApiServer the vmScrape configs look like this:

kubelet:
  vmScrape:
    kind: VMNodeScrape
    spec:
      scheme: "https"
      honorLabels: true
      interval: "30s"
      scrapeTimeout: "5s"
      tlsConfig:
        insecureSkipVerify: true
        caFile: /etc/vmagent/apitoken/ca.crt
      bearerTokenFile: /etc/vmagent/apitoken/token

kubeApiServer:
  enabled: true
  vmScrape:
    spec:
      endpoints:
        - bearerTokenFile: /etc/vmagent/apitoken/token
          port: https
          scheme: https
          tlsConfig:
            caFile: /etc/vmagent/apitoken/ca.crt
            serverName: kubernetes

Full values.yaml Reference:

vmagent:
  enabled: true
  annotations: { }
  additionalRemoteWrites:
    - url: https://<YOUR_VM_SERVER_WRITE_API>/
  spec:
    aPIServerConfig:
      bearerTokenFile: /etc/vmagent/apitoken/token
      host: "https://kubernetes.default.svc:443"
      tlsConfig:
        caFile: /etc/vmagent/apitoken/ca.crt
    serviceAccountName: vmagent-service-account
    port: "8429"
    securityContext:
      fsGroup: 65534
      runAsGroup: 65534
      runAsNonRoot: true
      runAsUser: 65534
    containers:
      - name: config-reloader
        image: <YOUR_ACR>/prometheus-operator/prometheus-config-reloader:<YOUR_TAG>
        volumeMounts:
          - mountPath: /etc/vmagent/apitoken
            name: kube-api-access-vmagent
            readOnly: true
    initContainers:
      - name: config-init
        image: <YOUR_ACR>/prometheus-operator/prometheus-config-reloader:<YOUR_TAG>
        volumeMounts:
          - mountPath: /etc/vmagent/apitoken
            name: kube-api-access-vmagent
            readOnly: true
    image:
      repository: <YOUR_ACR>/victoriametrics/vmagent
      tag: <YOUR_TAG>
      pullPolicy: IfNotPresent
    selectAllByDefault: true
    scrapeInterval: 20s
    externalLabels:
      cluster: <YOUR_CLUSTER>
    extraArgs:
      promscrape.streamParse: "true"
      promscrape.dropOriginalLabels: "true"

    volumeMounts:
      - mountPath: /etc/vmagent/apitoken
        name: kube-api-access-vmagent
        readOnly: true
    volumes:
      - name: kube-api-access-vmagent
        projected:
          defaultMode: 420
          sources:
            - serviceAccountToken:
                expirationSeconds: 3607
                path: token
            - configMap:
                name: kube-root-ca.crt
            - downwardAPI:
                items:
                  - fieldRef:
                      apiVersion: v1
                      fieldPath: metadata.namespace
                    path: namespace
  # -- (object) VMAgent ingress configuration
  ingress:
    enabled: false
    annotations:
      { }
    labels: { }
    path: ""
    pathType: Prefix
    hosts:
      - vmagent.domain.com
    extraPaths: [ ]
    tls: [ ]

Then run the helm upgrade command to apply the changes, and use kubectl logs to check the operator and vmagent logs.

@f41gh7
Collaborator

f41gh7 commented Feb 28, 2025

@hwxy233 Thanks for the detailed investigation. I think the operator should expose a field for disabling automount. That would simplify service account mounts and remove the need to specify the API server configuration.
