
[BUG] [CLI] Rancher CLI does not store more than 1 cluster token at a time per host #46997

Open
idogada-akamai opened this issue Sep 9, 2024 · 8 comments
Labels
kind/bug Issues that are defects reported by users or that we know have reached a real release

Comments

@idogada-akamai

Rancher Server Setup

  • Rancher version: 2.9.1
  • Installation option (Docker install/Helm Chart): Helm
    • If Helm Chart, Kubernetes Cluster and version (RKE1, RKE2, k3s, EKS, etc): AKS 1.30

Information about the Cluster

  • Kubernetes version: 1.30
  • Cluster Type (Local/Downstream): Local, and downstream
    • If downstream, what type of cluster? (Custom/Imported or specify provider for Hosted/Infrastructure Provider): imported

User Information

  • What is the role of the user logged in? (Admin/Cluster Owner/Cluster Member/Project Owner/Project Member/Custom) Admin

Describe the bug
We have multiple clusters connected to the same Rancher instance, and we've set the option kubeconfig-generate-token=false, so users now authenticate through the AzureAD SSO integration to get a token.

Each time a user connects to a different cluster, the CLI overwrites the existing cluster token (per host) instead of adding the new one to the cli2.json file.
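The behavior below is consistent with the CLI replacing the per-server kubeCredentials map wholesale on every save, rather than merging the new entry into it. A minimal Python sketch of the difference (the function names save_overwrite and save_merge are hypothetical, for illustration only, not the actual CLI code):

```python
# Illustration of the suspected bug: replacing the whole kubeCredentials map
# on save (overwrite) vs. merging the new entry in (the expected behavior).
# All names here are hypothetical and only mirror the cli2.json structure.

def save_overwrite(config, server, cluster, credential):
    # What the CLI appears to do: the map is rebuilt with only the new entry.
    config["Servers"][server]["kubeCredentials"] = {cluster: credential}
    return config

def save_merge(config, server, cluster, credential):
    # What this issue asks for: keep existing entries and add the new one.
    config["Servers"][server]["kubeCredentials"][cluster] = credential
    return config

config = {"Servers": {"rancher.mydomain.com": {"kubeCredentials": {}}}}

save_merge(config, "rancher.mydomain.com", "hashicorp-vault-dev_", {"token": "A"})
save_merge(config, "rancher.mydomain.com", "local_", {"token": "B"})
# With merging, both tokens survive:
print(sorted(config["Servers"]["rancher.mydomain.com"]["kubeCredentials"]))
# ['hashicorp-vault-dev_', 'local_']

save_overwrite(config, "rancher.mydomain.com", "hashicorp-vault-dev_", {"token": "A2"})
# With overwriting, only the most recent token remains:
print(sorted(config["Servers"]["rancher.mydomain.com"]["kubeCredentials"]))
# ['hashicorp-vault-dev_']
```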

Here's an example of what's happening, given the following kubeconfig file:

apiVersion: v1
kind: Config
clusters:
  - name: "hashicorp-vault-dev"
    cluster:
      server: "https://rancher.mydomain.com/k8s/clusters/c-nmx9g"
  - name: "local"
    cluster:
      server: "https://rancher.mydomain.com/k8s/clusters/local"
users:
  - name: "hashicorp-vault-dev"
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        args:
          - token
          - --server=rancher.mydomain.com
          - --user=hashicorp-vault-dev
          - --auth-provider=azureADProvider
        command: rancher
  - name: "local"
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        args:
          - token
          - --server=rancher.mydomain.com
          - --user=local
          - --auth-provider=azureADProvider
        command: rancher
contexts:
  - name: "hashicorp-vault-dev"
    context:
      user: "hashicorp-vault-dev"
      cluster: "hashicorp-vault-dev"
  - name: "local"
    context:
      user: "local"
      cluster: "local"
current-context: "local"
☸ hashicorp-vault-dev ~/kubeconfigs
❯ kubectx hashicorp-vault-dev
Switched to context "hashicorp-vault-dev".

☸ hashicorp-vault-dev ~/kubeconfigs
❯ k get pods
INFO[0000] Saving config to /Users/igada/.rancher/cli2.json
https://rancher.mydomain.com/v3-public/authProviders

To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code FQ59BQ99J to authenticate.

INFO[0027] Saving config to /Users/igada/.rancher/cli2.json
No resources found in default namespace.

☸ hashicorp-vault-dev ~/kubeconfigs
❯ jq '.' ~/.rancher/cli2.json
{
  "Servers": {
    "rancher.mydomain.com": {
      "accessKey": "",
      "secretKey": "",
      "tokenKey": "",
      "url": "",
      "project": "",
      "cacert": "",
      "kubeCredentials": {
        "hashicorp-vault-dev_": {
          "kind": "ExecCredential",
          "apiVersion": "client.authentication.k8s.io/v1beta1",
          "spec": {},
          "status": {
            "expirationTimestamp": "2024-10-09T09:29:15Z",
            "token": "XXXXXXXXXX"
          }
        }
      },
      "kubeConfigs": null
    }
  },
  "CurrentServer": ""
}

☸ hashicorp-vault-dev ~/kubeconfigs
❯ kubectx local
Switched to context "local".

☸ local ~/kubeconfigs
❯ k get pods
https://rancher.mydomain.com/v3-public/authProviders

To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code ABHDYFK6Y to authenticate.

INFO[0017] Saving config to /Users/igada/.rancher/cli2.json
NAME                                 READY   STATUS    RESTARTS   AGE
nginx-deployment-6f8fdbf5f4-pd8sl    1/1     Running   0          7d2h
nginx-deployment-6f8fdbf5f4-wgm72    1/1     Running   0          7d2h
reloader-reloader-5f44c4cb9c-k6kkr   1/1     Running   0          7d2h

☸ local ~/kubeconfigs
❯ jq '.' ~/.rancher/cli2.json
{
  "Servers": {
    "rancher.mydomain.com": {
      "accessKey": "",
      "secretKey": "",
      "tokenKey": "",
      "url": "",
      "project": "",
      "cacert": "",
      "kubeCredentials": {
        "local_": {
          "kind": "ExecCredential",
          "apiVersion": "client.authentication.k8s.io/v1beta1",
          "spec": {},
          "status": {
            "expirationTimestamp": "2024-10-09T09:29:55Z",
            "token": "XXXXXXX"
          }
        }
      },
      "kubeConfigs": null
    }
  },
  "CurrentServer": ""
}

☸ local ~/kubeconfigs
❯ kubectx hashicorp-vault-dev
Switched to context "hashicorp-vault-dev".

☸ hashicorp-vault-dev ~/kubeconfigs
❯ k get pods
https://rancher.mydomain.com/v3-public/authProviders

To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code ALXWCPV2S to authenticate.

^C

As you can see, each time I switch context the credentials are overwritten rather than re-used.

To Reproduce

  • Connect 2 clusters to Rancher
  • Enable AzureAD SSO
  • Set kubeconfig-generate-token=false
  • Connect to cluster 1
  • Switch context to cluster 2
  • Switch back to cluster 1 and notice that the CLI asks you to login again

Result
Credentials are overwritten every time you switch context.

Expected Result
When we connect to a new cluster, the new credentials should be appended rather than overwriting the existing ones.
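For reference, a hypothetical cli2.json after such a fix would hold both entries side by side under the same server (tokens redacted, non-relevant fields omitted; this is the expected shape, not actual output):

```json
{
  "Servers": {
    "rancher.mydomain.com": {
      "kubeCredentials": {
        "hashicorp-vault-dev_": {
          "kind": "ExecCredential",
          "status": { "token": "XXXXXXXXXX" }
        },
        "local_": {
          "kind": "ExecCredential",
          "status": { "token": "XXXXXXX" }
        }
      }
    }
  }
}
```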

@idogada-akamai idogada-akamai added the kind/bug Issues that are defects reported by users or that we know have reached a real release label Sep 9, 2024
@Aransh

Aransh commented Sep 9, 2024

Seeing the same

@EliranTurgeman

Same thing here.

@idogada-akamai idogada-akamai changed the title [BUG] Rancher CLI does store more than 1 cluster token at a time per host [BUG] Rancher CLI does not store more than 1 cluster token at a time per host Sep 9, 2024
@nsadehh

nsadehh commented Sep 10, 2024

same here

@idogada-akamai
Author

@enrichman
Can we get some attention on this?
This makes the feature completely unusable

@enrichman
Contributor

Hi @idogada-akamai, thanks, I'll try to prioritize this. 👍

@asapir1

asapir1 commented Sep 18, 2024

Is there any solution yet?
I have a similar problem.

@idogada-akamai idogada-akamai changed the title [BUG] Rancher CLI does not store more than 1 cluster token at a time per host [BUG][CLI] Rancher CLI does not store more than 1 cluster token at a time per host Sep 18, 2024
@idogada-akamai idogada-akamai changed the title [BUG][CLI] Rancher CLI does not store more than 1 cluster token at a time per host [BUG] [CLI] Rancher CLI does not store more than 1 cluster token at a time per host Sep 18, 2024
@idogada-akamai
Author

@enrichman
I have opened a PR in the CLI repo that seems to fix the issue: rancher/cli#397. Let me know if it's acceptable.

@lkalisch

I'm facing the same problem

7 participants