k3s local access issue #888
Answered by mudler
Asked by massimogallina in Q&A
I have installed the image with the following cloud-config:

```yaml
#cloud-config
# Define the user accounts on the node.
users:
  - name: "kairos"
    passwd: "kairos"
    ssh_authorized_keys:
      - "ssh-rsa AAA..."

# Enable K3s on the node.
k3s:
  enabled: true
  args:
    - --disable=traefik,servicelb

stages:
  initramfs:
    - files:
        - path: /var/lib/connman/default.config
          permission: 0644
          content: |
            [service_eth0]
            Type = ethernet
            IPv4 = 192.168.11.55/24/192.168.11.250
            Nameservers = 192.168.1.181 8.8.8.8
  boot:
    - name: "Setup dns"
      dns:
        nameservers:
          - 192.168.1.181
          - 8.8.8.8
        path: "/etc/resolv.conf"
    - name: "Setup hostname"
      hostname: "test"
```

After the installation and initial configuration, I access the node over SSH and run `kubectl`, but it fails. Am I missing something?
Answered by mudler on Feb 14, 2023
Replies: 1 comment, 2 replies
Seems `kubectl` can't get the `KUBECONFIG` file; from your commands, it could be that it is out of scope. Try with:

```shell
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
sudo -E kubectl get nodes
```

or better, just run as root:

```shell
sudo su -   # become root
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes
```
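To see why the variable ends up "out of scope": a default sudo policy resets the environment, so an `export` from your login shell is not passed to the command sudo runs unless you use `-E` (or configure `env_keep`). A small sketch of the same effect, simulated with `env -i` (which scrubs the environment like a default sudo policy does, no root needed):

```shell
# Exported variables reach ordinary child processes...
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
sh -c 'echo "inherited: ${KUBECONFIG:-unset}"'   # prints the path

# ...but not processes started with a scrubbed environment,
# which is what plain `sudo kubectl` effectively gets.
env -i sh -c 'echo "scrubbed: ${KUBECONFIG:-unset}"'   # prints "unset"
```

This is why `sudo -E kubectl get nodes` works while `sudo kubectl get nodes` does not: `-E` asks sudo to preserve the caller's environment, including `KUBECONFIG`.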
Answer selected by massimogallina