
[bitnami/multus-cni] failed to get context for the kubeconfig #30606

Open
brightdroid opened this issue Nov 24, 2024 · 9 comments · May be fixed by #31045
Assignees
Labels
multus-cni · tech-issues (The user has a technical issue about an application) · triage (Triage is needed)

Comments

@brightdroid

Name and Version

bitnamicharts/multus-cni:2.1.19

What architecture are you using?

amd64

What steps will reproduce the bug?

  • installed k3s v1.29.10+k3s1
  • installed multus with the values below
  • started a netshoot debug pod: kubectl netshoot run tmp-shell
  • inspected the pod "tmp-shell" (which did not start)

Are you using any custom parameters or values?

fullnameOverride: multus
hostCNIBinDir: /var/lib/rancher/k3s/data/cni/
hostCNINetDir: /var/lib/rancher/k3s/agent/etc/cni/net.d

What is the expected behavior?

No response

What do you see instead?

Events from the "tmp-shell" pod:
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "3574ecbded3a47def3133b1d8a783035a1a63f6d95ba4c4a3845c5cf24c78614": plugin type="multus" failed (add): Multus: error getting k8s client: GetK8sClient: failed to get context for the kubeconfig /etc/cni/net.d/multus.d/multus.kubeconfig: stat /etc/cni/net.d/multus.d/multus.kubeconfig: no such file or directory

Additional information

Logs from the multus pod:

multus-wwvcj multus-cni kubeconfig is created in /bitnami/multus-cni/host/etc/cni/net.d/multus.d/multus.kubeconfig
multus-wwvcj multus-cni kubeconfig file is created.
multus-wwvcj multus-cni master capabilities is get from conflist
multus-wwvcj multus-cni multus config file is created.
multus-wwvcj generate-kubeconfig multus multus copy succeeded!
@brightdroid added the tech-issues label Nov 24, 2024
@github-actions bot added the triage label Nov 24, 2024
@javsalgar changed the title from "multus: failed to get context for the kubeconfig" to "[bitnami/multus-cni] failed to get context for the kubeconfig" Nov 25, 2024
@javsalgar
Contributor

Hi,

This is strange, because according to the logs the kubeconfig should be there. Could you try entering the container with kubectl exec and verify that the kubeconfig file was created? If it was, maybe the hostCNINetDir value is incorrect.

@brightdroid
Author

I think the multus daemonset uses the wrong paths.

  • the multus pods created the kubeconfig on the hosts in /var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig (as expected)
  • the multus pods have the host path /var/lib/rancher/k3s/agent/etc/cni/net.d mounted at /bitnami/multus-cni/host/etc/cni/net.d

So the multus pods should use the path /bitnami/multus-cni/host/etc/cni/net.d/multus.d/multus.kubeconfig.
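For reference, a minimal sketch of the mount relationship described above, assuming the chart wires hostCNINetDir into the DaemonSet roughly like this (names and structure are illustrative, not copied from the chart's templates):

# Illustrative DaemonSet excerpt (not the chart's actual template):
# the host's CNI net.d directory is mounted into the multus pod, so a
# file the pod writes under /bitnami/multus-cni/host/etc/cni/net.d
# lands in /var/lib/rancher/k3s/agent/etc/cni/net.d on the host.
containers:
  - name: multus-cni
    volumeMounts:
      - name: host-cni-net-dir
        mountPath: /bitnami/multus-cni/host/etc/cni/net.d
volumes:
  - name: host-cni-net-dir
    hostPath:
      path: /var/lib/rancher/k3s/agent/etc/cni/net.d  # from hostCNINetDir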

@javsalgar
Contributor

Could you try changing the path in the YAML to confirm this? If so, as you spotted the issue, would you like to submit a PR?

@brightdroid
Author

I think I found the issue: the file (on the host) /var/lib/rancher/k3s/agent/etc/cni/net.d/00-multus.conflist contains the wrong path to the kubeconfig:

{
    "cniVersion": "1.0.0",
    "name": "multus-cni-network",
    "plugins": [ {
        "type": "multus",
        "capabilities": {"bandwidth":true,"portMappings":true},
        "logLevel": "verbose",
        "cniConf": "/bitnami/multus-cni/host/etc/cni/net.d",
        "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig",
        "delegates": [
            {"cniVersion":"1.0.0","name":"cbr0","plugins":[{"delegate":{"forceAddress":true,"hairpinMode":true,"isDefaultGateway":true},"type":"flannel"},{"capabilities":{"portMappings":true},"type":"portmap"},{"capabilities":{"bandwidth":true},"type":"bandwidth"}]}
        ]
    }]
}

The kubeconfig lives under the CNI config directory (the one cniConf points at, via the container mount), so the correct path is /var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig. The conflist is read by the container runtime, which invokes the multus binary on the host, so the kubeconfig path is resolved against the host filesystem, and k3s keeps its CNI config under /var/lib/rancher/k3s/agent/etc/cni/net.d rather than /etc/cni/net.d.
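In other words, the kubeconfig entry in 00-multus.conflist would need to point at the host location instead, e.g. (a hand-edited sketch of that single line; the rest of the file unchanged):

        "kubeconfig": "/var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig",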

@javsalgar
Contributor

Let's try something: could you deploy using the args value (so the chart does not use the default arguments) and see which exact parameter it should be?

@brightdroid
Author

I fixed the issue by adding the following arg to args:

--multus-kubeconfig-file-host=/var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig

This could be set by default based on hostCNINetDir, I think?
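Putting the original values and this workaround together, a complete values file looks roughly like this (a sketch; whether the chart appends args to its default arguments or replaces them entirely is worth verifying against the chart templates):

# values.yaml for k3s, including the kubeconfig-path workaround
# (illustrative; check how the chart combines `args` with its defaults)
fullnameOverride: multus
hostCNIBinDir: /var/lib/rancher/k3s/data/cni/
hostCNINetDir: /var/lib/rancher/k3s/agent/etc/cni/net.d
args:
  - --multus-kubeconfig-file-host=/var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig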

@javsalgar
Contributor

Hi,

I'd say so. Would you like to submit a PR adding that parameter?

brightdroid added a commit to brightdroid/bitnami-charts that referenced this issue Dec 15, 2024
@brightdroid linked a pull request Dec 15, 2024 that will close this issue
@github-actions

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

@github-actions bot added the stale label Dec 18, 2024
@brightdroid
Author

> This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

Just to pause the bot: I already added a pull request.

@carrodher removed the stale label Dec 18, 2024