# [BUG] Fail to create cluster on rootful podman macos #1447

GiGurra opened this issue Jun 2, 2024 · 0 comments
Labels: bug (Something isn't working)

## What did you do

Trying to create any kind of k3d cluster fails for me on the latest macOS, using the latest brew-distributed podman and the latest brew version of k3d. I can run minikube on podman just fine, but I'd much prefer k3d, as I use it on my Linux machines where it works wonderfully.

```
k3d cluster create k3d1 --servers-memory 8G --verbose
```

Fails on

```
ERRO[0000] Failed to run tools container for cluster 'k3d1'
...
ERRO[0001] failed to gather environment information used for cluster creation: error starting existing tools node k3d-k3d1-tools: docker failed to start container for node 'k3d-k3d1-tools': Error response from daemon: failed to create new hosts file: unable to replace "host-gateway" of host entry "host.k3d.internal:host-gateway": host containers internal IP address is empty
```

I suspect it's related to this: containers/podman#21681
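
If that is the root cause, it should be reproducible with plain podman, without k3d involved at all. A minimal check, under the assumption that podman's `--add-host` accepts the same special `host-gateway` value that docker does and that k3d hits the same hosts-file code path, would be something like:

```
# Hypothetical reproduction attempt, independent of k3d: ask podman to create a
# hosts entry that points at "host-gateway". On an affected rootful podman
# machine this should presumably fail with the same
# "host containers internal IP address is empty" error.
podman run --rm --add-host=host.k3d.internal:host-gateway docker.io/library/alpine cat /etc/hosts
```

If that command succeeds and prints a hosts file with a real IP for host.k3d.internal, then the problem is more likely on the k3d side.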

Full log below:

```
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] Runtime Info:
&{Name:docker Endpoint:/var/run/docker.sock Version:5.1.0-dev-c9808e7ed OSType:linux OS:fedora Arch:arm64 CgroupVersion:2 CgroupDriver:systemd Filesystem:xfs InfoName:localhost.localdomain}
DEBU[0000] Additional CLI Configuration:
cli:
  api-port: ""
  env: []
  k3s-node-labels: []
  k3sargs: []
  ports: []
  registries:
    create: ""
  runtime-labels: []
  runtime-ulimits: []
  volumes: []
hostaliases: []
DEBU[0000] Configuration:
agents: 0
image: docker.io/rancher/k3s:v1.28.8-k3s1
network: ""
options:
  k3d:
    disableimagevolume: false
    disableloadbalancer: false
    disablerollback: false
    loadbalancer:
      configoverrides: []
    timeout: 0s
    wait: true
  kubeconfig:
    switchcurrentcontext: true
    updatedefaultkubeconfig: true
  runtime:
    agentsmemory: ""
    gpurequest: ""
    hostpidmode: false
    serversmemory: 8G
registries:
  config: ""
  use: []
servers: 1
subnet: ""
token: ""
DEBU[0000] ========== Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha5} ObjectMeta:{Name:} Servers:1 Agents:0 ExposeAPI:{Host: HostIP: HostPort:} Image:docker.io/rancher/k3s:v1.28.8-k3s1 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory:8G AgentsMemory: HostPidMode:false Labels:[] Ulimits:[]}} Env:[] Registries:{Use:[] Create:<nil> Config:} HostAliases:[]}
==========================
DEBU[0000] ========== Merged Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha5} ObjectMeta:{Name:} Servers:1 Agents:0 ExposeAPI:{Host: HostIP: HostPort:50468} Image:docker.io/rancher/k3s:v1.28.8-k3s1 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory:8G AgentsMemory: HostPidMode:false Labels:[] Ulimits:[]}} Env:[] Registries:{Use:[] Create:<nil> Config:} HostAliases:[]}
==========================
DEBU[0000] generated loadbalancer config:
ports:
  6443.tcp:
  - k3d-k3d1-server-0
settings:
  workerConnections: 1024
DEBU[0000] ===== Merged Cluster Config =====
&{TypeMeta:{Kind: APIVersion:} Cluster:{Name:k3d1 Network:{Name:k3d-k3d1 ID: External:false IPAM:{IPPrefix:invalid Prefix IPsUsed:[] Managed:false} Members:[]} Token: Nodes:[0x140004c8c40 0x140004c8e00] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0x140000e7580 ServerLoadBalancer:0x14000408c00 ImageVolume: Volumes:[]} ClusterCreateOpts:{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory:8G AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d] GlobalEnv:[] HostAliases:[] Registries:{Create:<nil> Use:[] Config:<nil>}} KubeconfigOpts:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true}}
===== ===== =====
DEBU[0000] '--kubeconfig-update-default set: enabling wait-for-server
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-k3d1'
INFO[0000] Created image volume k3d-k3d1-images
INFO[0000] Starting new tools node...
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] [Docker] DockerHost: '' ()
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] [autofix cgroupsv2] cgroupVersion: 2
DEBU[0000] Created container k3d-k3d1-tools (ID: a69c2b3246ddb33bb6b877a531375377d9e09c9dfee7f7c7edc7ff65f76fbc30)
DEBU[0000] Node k3d-k3d1-tools Start Time: 2024-06-02 21:39:58.998775 +0200 CEST m=+0.110731959
INFO[0000] Starting node 'k3d-k3d1-tools'
ERRO[0000] Failed to run tools container for cluster 'k3d1'
INFO[0001] Creating node 'k3d-k3d1-server-0'
DEBU[0001] Created container k3d-k3d1-server-0 (ID: a9893182bc5d3752e3f19bd34b175cc48cf5ca681547228be43bbe94a4e0ba00)
DEBU[0001] Created node 'k3d-k3d1-server-0'
INFO[0001] Creating LoadBalancer 'k3d-k3d1-serverlb'
DEBU[0001] Created container k3d-k3d1-serverlb (ID: 149e60a9a03bca43f0d5762e3ff94a03d6c2b7c7ff385841865cd527744b858e)
DEBU[0001] Created loadbalancer 'k3d-k3d1-serverlb'
DEBU[0001] DOCKER_SOCK=/var/run/docker.sock
INFO[0001] Using the k3d-tools node to gather environment information
DEBU[0001] no netlabel present on container /k3d-k3d1-tools
DEBU[0001] failed to get IP for container /k3d-k3d1-tools as we couldn't find the cluster network
INFO[0001] Starting existing tools node k3d-k3d1-tools...
INFO[0001] Starting node 'k3d-k3d1-tools'
ERRO[0001] failed to gather environment information used for cluster creation: error starting existing tools node k3d-k3d1-tools: docker failed to start container for node 'k3d-k3d1-tools': Error response from daemon: failed to create new hosts file: unable to replace "host-gateway" of host entry "host.k3d.internal:host-gateway": host containers internal IP address is empty
ERRO[0001] Failed to create cluster >>> Rolling Back
INFO[0001] Deleting cluster 'k3d1'
DEBU[0001] no netlabel present on container /k3d-k3d1-tools
DEBU[0001] failed to get IP for container /k3d-k3d1-tools as we couldn't find the cluster network
DEBU[0001] Cluster Details: &{Name:k3d1 Network:{Name:k3d-k3d1 ID:ad495d5c7f6d0b834214b0fd39edcc514face25a5323e42f7b1258cb46d5569b External:false IPAM:{IPPrefix:10.89.0.0/24 IPsUsed:[] Managed:false} Members:[]} Token:IcvYZCqzFVzUTkNYiIGp Nodes:[0x140004c8c40 0x140004c8e00 0x14000502a80] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0x140000e7580 ServerLoadBalancer:0x14000408c00 ImageVolume:k3d-k3d1-images Volumes:[k3d-k3d1-images]}
DEBU[0001] Deleting node k3d-k3d1-serverlb ...
DEBU[0001] Deleting node k3d-k3d1-server-0 ...
DEBU[0001] Cleaning fake files folder from k3d config dir for this node...
DEBU[0001] Deleting node k3d-k3d1-tools ...
INFO[0001] Deleting cluster network 'k3d-k3d1'
INFO[0001] Deleting 1 attached volumes...
DEBU[0001] Deleting volume k3d-k3d1-images...
FATA[0001] Cluster creation FAILED, all changes have been rolled back!
```


## Which OS & Architecture

macOS, latest (arm64)

```
k3d runtime-info
arch: arm64
cgroupdriver: systemd
cgroupversion: "2"
endpoint: /var/run/docker.sock
filesystem: xfs
infoname: localhost.localdomain
name: docker
os: fedora
ostype: linux
version: 5.1.0-dev-c9808e7ed
```

## Which version of k3d

- output of `k3d version`

```
k3d --version
k3d version v5.6.3
k3s version v1.28.8-k3s1 (default)
```

## Which version of docker

```
podman --version
podman version 5.1.0
```

```
podman info
host:
  arch: arm64
  buildahVersion: 1.36.0-dev
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - pids
  - rdma
  - misc
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.10-1.fc40.aarch64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.10, commit: '
  cpuUtilization:
    idlePercent: 99.71
    systemPercent: 0.16
    userPercent: 0.13
  cpus: 10
  databaseBackend: sqlite
  distribution:
    distribution: fedora
    variant: coreos
    version: "40"
  eventLogger: journald
  freeLocks: 2046
  hostname: localhost.localdomain
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 6.8.8-300.fc40.aarch64
  linkmode: dynamic
  logDriver: journald
  memFree: 23918878720
  memTotal: 24544239616
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.10.0-1.20240506173313423293.main.51.g069ab45.fc40.aarch64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.11.0-dev
    package: netavark-1.10.1-1.20240513124445753694.main.112.gd982b8b.fc40.aarch64
    path: /usr/libexec/podman/netavark
    version: netavark 1.11.0-dev
  ociRuntime:
    name: crun
    package: crun-1.14.4-1.20240424212458225367.main.39.gd075e53.fc40.aarch64
    path: /usr/bin/crun
    version: |-
      crun version UNKNOWN
      commit: 320753b75c4e30085176ffc515936df286edbde2
      rundir: /run/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20240426.gd03c4e2-1.fc40.aarch64
    version: |
      pasta 0^20240426.gd03c4e2-1.fc40.aarch64-pasta
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.2-2.fc40.aarch64
    version: |-
      slirp4netns version 1.2.2
      commit: 0ee2d87523e906518d34a6b423271e4826f71faf
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 0
  swapTotal: 0
  uptime: 0h 5m 19.00s
  variant: v8
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /usr/share/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 0
    stopped: 1
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev
  graphRoot: /var/lib/containers/storage
  graphRootAllocated: 428749340672
  graphRootUsed: 153997012992
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "true"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 331
  runRoot: /run/containers/storage
  transientStore: false
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 5.1.0-dev-c9808e7ed
  Built: 1715558400
  BuiltTime: Mon May 13 02:00:00 2024
  GitCommit: ""
  GoVersion: go1.22.2
  Os: linux
  OsArch: linux/arm64
  Version: 5.1.0-dev-c9808e7ed
```