This repository has been archived by the owner on Aug 26, 2021. It is now read-only.

Pod kube-lego not starting #332

Open
IvanBiv opened this issue May 13, 2018 · 4 comments

Comments


IvanBiv commented May 13, 2018

I'm seeing this problem:
Warning BackOff 59m (x2772 over 11h) kubelet, server2 Back-off restarting failed container

$ kubectl describe pods -n kube-lego
Name:           kube-lego-68f8bc79c5-5dk9v
Namespace:      kube-lego
Node:           server2/192.168.88.12
Start Time:     Sat, 12 May 2018 17:50:57 +0300
Labels:         app=kube-lego
                pod-template-hash=2494673571
Annotations:    <none>
Status:         Running
IP:             10.244.0.156
Controlled By:  ReplicaSet/kube-lego-68f8bc79c5
Containers:
  kube-lego:
    Container ID:   docker://0c734f01efc8dc0192f7d2985db730520d4a3ea5e5791d71b3b44ae8faee510a
    Image:          jetstack/kube-lego:0.1.3
    Image ID:       docker-pullable://jetstack/kube-lego@sha256:ffcc351c55f3675409232171d57c4f397a90fa5ffd4c708f2ab4513112ff6bdf
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Sun, 13 May 2018 05:19:24 +0300
      Finished:     Sun, 13 May 2018 05:19:25 +0300
    Ready:          False
    Restart Count:  140
    Readiness:      http-get http://:8080/healthz delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:
      LEGO_EMAIL:      <set to the key 'lego.email' of config map 'kube-lego'>  Optional: false
      LEGO_URL:        <set to the key 'lego.url' of config map 'kube-lego'>    Optional: false
      LEGO_NAMESPACE:  kube-lego (v1:metadata.namespace)
      LEGO_POD_IP:      (v1:status.podIP)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-v8wsw (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          False 
  PodScheduled   True 
Volumes:
  default-token-v8wsw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-v8wsw
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                   From              Message
  ----     ------                  ----                  ----              -------
  Normal   Started                 1h (x118 over 11h)    kubelet, server2  Started container
  Normal   Pulling                 1h (x120 over 11h)    kubelet, server2  pulling image "jetstack/kube-lego:0.1.3"
  Warning  BackOff                 59m (x2772 over 11h)  kubelet, server2  Back-off restarting failed container
  Normal   SuccessfulMountVolume   53m                   kubelet, server2  MountVolume.SetUp succeeded for volume "default-token-v8wsw"
  Warning  FailedCreatePodSandBox  52m                   kubelet, server2  Failed create pod sandbox: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "kube-lego-68f8bc79c5-5dk9v_kube-lego" network: open /run/flannel/subnet.env: no such file or directory
  Normal   SandboxChanged          52m (x2 over 53m)     kubelet, server2  Pod sandbox changed, it will be killed and re-created.
  Warning  Unhealthy               51m                   kubelet, server2  Readiness probe failed: Get http://10.244.0.149:8080/healthz: dial tcp 10.244.0.149:8080: getsockopt: connection refused
  Normal   Pulled                  50m (x3 over 52m)     kubelet, server2  Successfully pulled image "jetstack/kube-lego:0.1.3"
  Normal   Created                 50m (x3 over 52m)     kubelet, server2  Created container
  Normal   Started                 50m (x3 over 52m)     kubelet, server2  Started container
  Normal   Pulling                 49m (x4 over 52m)     kubelet, server2  pulling image "jetstack/kube-lego:0.1.3"
  Warning  BackOff                 12m (x169 over 51m)   kubelet, server2  Back-off restarting failed container
  Normal   SuccessfulMountVolume   8m                    kubelet, server2  MountVolume.SetUp succeeded for volume "default-token-v8wsw"
  Warning  FailedCreatePodSandBox  7m                    kubelet, server2  Failed create pod sandbox: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "kube-lego-68f8bc79c5-5dk9v_kube-lego" network: open /run/flannel/subnet.env: no such file or directory
  Normal   SandboxChanged          7m (x2 over 8m)       kubelet, server2  Pod sandbox changed, it will be killed and re-created.
  Warning  Failed                  7m                    kubelet, server2  Failed to pull image "jetstack/kube-lego:0.1.3": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Failed                  7m                    kubelet, server2  Error: ErrImagePull
  Normal   BackOff                 6m                    kubelet, server2  Back-off pulling image "jetstack/kube-lego:0.1.3"
  Warning  Failed                  6m                    kubelet, server2  Error: ImagePullBackOff
  Normal   Pulling                 5m (x3 over 7m)       kubelet, server2  pulling image "jetstack/kube-lego:0.1.3"
  Normal   Pulled                  5m (x2 over 6m)       kubelet, server2  Successfully pulled image "jetstack/kube-lego:0.1.3"
  Normal   Created                 5m (x2 over 6m)       kubelet, server2  Created container
  Normal   Started                 5m (x2 over 6m)       kubelet, server2  Started container
  Warning  BackOff                 2m (x17 over 6m)      kubelet, server2  Back-off restarting failed container
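The `FailedCreatePodSandBox` events above complain that `/run/flannel/subnet.env` is missing, which means flannel never wrote its per-node config on server2, so CNI networking for new sandboxes fails. A minimal sketch of checking for that file on the node; the helper function and the `app=flannel` label selector in the comment are assumptions (the usual flannel DaemonSet defaults), not taken from this issue:

```shell
#!/bin/sh
# Hypothetical helper: verify flannel's node config file exists.
# Flannel writes /run/flannel/subnet.env once it has obtained a subnet
# lease; if the file is absent, CNI sandbox creation fails as in the
# events above.
check_flannel() {
  f="${1:-/run/flannel/subnet.env}"
  if [ -f "$f" ]; then
    echo "flannel subnet config present:"
    cat "$f"
  else
    echo "missing $f - flannel is not healthy on this node" >&2
    return 1
  fi
}

# If the file is missing, restarting the flannel pods often regenerates it
# (label is the common default, verify against your manifests):
#   kubectl -n kube-system delete pod -l app=flannel
```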

$ kubectl get pods kube-lego-68f8bc79c5-5dk9v -n kube-lego -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: 2018-05-12T14:50:57Z
  generateName: kube-lego-68f8bc79c5-
  labels:
    app: kube-lego
    pod-template-hash: "2494673571"
  name: kube-lego-68f8bc79c5-5dk9v
  namespace: kube-lego
  ownerReferences:
  - apiVersion: extensions/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: kube-lego-68f8bc79c5
    uid: e038a7ec-55f3-11e8-9861-4ccc6a60fcc8
  resourceVersion: "3215908"
  selfLink: /api/v1/namespaces/kube-lego/pods/kube-lego-68f8bc79c5-5dk9v
  uid: e0c7188e-55f3-11e8-9861-4ccc6a60fcc8
spec:
  containers:
  - env:
    - name: LEGO_EMAIL
      valueFrom:
        configMapKeyRef:
          key: lego.email
          name: kube-lego
    - name: LEGO_URL
      valueFrom:
        configMapKeyRef:
          key: lego.url
          name: kube-lego
    - name: LEGO_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: LEGO_POD_IP
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.podIP
    image: jetstack/kube-lego:0.1.3
    imagePullPolicy: Always
    name: kube-lego
    ports:
    - containerPort: 8080
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /healthz
        port: 8080
        scheme: HTTP
      initialDelaySeconds: 5
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-v8wsw
      readOnly: true
  dnsPolicy: ClusterFirst
  nodeName: server2
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-v8wsw
    secret:
      defaultMode: 420
      secretName: default-token-v8wsw
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-05-12T14:50:57Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2018-05-12T14:50:57Z
    message: 'containers with unready status: [kube-lego]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2018-05-12T14:50:57Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://dba54b609e3897bad9db2d116f96ad567077e477fff0321bcdf625e1e3ff71dc
    image: jetstack/kube-lego:0.1.3
    imageID: docker-pullable://jetstack/kube-lego@sha256:ffcc351c55f3675409232171d57c4f397a90fa5ffd4c708f2ab4513112ff6bdf
    lastState:
      terminated:
        containerID: docker://dba54b609e3897bad9db2d116f96ad567077e477fff0321bcdf625e1e3ff71dc
        exitCode: 2
        finishedAt: 2018-05-13T02:24:35Z
        reason: Error
        startedAt: 2018-05-13T02:24:34Z
    name: kube-lego
    ready: false
    restartCount: 141
    state:
      waiting:
        message: Back-off 5m0s restarting failed container=kube-lego pod=kube-lego-68f8bc79c5-5dk9v_kube-lego(e0c7188e-55f3-11e8-9861-4ccc6a60fcc8)
        reason: CrashLoopBackOff
  hostIP: 192.168.88.12
  phase: Running
  podIP: 10.244.0.156
  qosClass: BestEffort
  startTime: 2018-05-12T14:50:57Z

$ sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 1
$ kubectl get nodes
NAME               STATUS    ROLES     AGE       VERSION
ivan-workstation   Ready     <none>    9d        v1.10.0
server1            Ready     <none>    9d        v1.10.2
server2            Ready     master    31d       v1.10.0

@chrissound

Have you looked at the logs from the pod?


IvanBiv commented May 17, 2018

@chrissound Not yet.
I'm currently redeploying the cluster with Kubespray.
Previously I deployed Kubernetes following these articles:
https://habrahabr.ru/post/348688/
https://habrahabr.ru/company/southbridge/blog/334846/

@chrissound

Hmm, what I meant was that the logs would probably show which error is causing the pod not to start.
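For a CrashLoopBackOff pod, the crashed container's output is still retrievable with `kubectl logs --previous`. A sketch using the pod and namespace names from this issue; the wrapper function is illustrative, only the `kubectl logs` flags matter:

```shell
# Sketch: fetch logs from the last terminated instance of a crashing pod.
previous_logs() {
  pod="$1"
  ns="${2:-kube-lego}"
  # --previous reads the terminated container's log, which survives the
  # restart and usually shows why the process exited with code 2.
  kubectl logs -n "$ns" "$pod" --previous
}

# Usage (requires access to the cluster from the issue):
#   previous_logs kube-lego-68f8bc79c5-5dk9v
```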


IvanBiv commented May 17, 2018

@chrissound Understood. But I can no longer reproduce the problem (I never solved it) because I'm in the middle of redeploying the cluster with Kubespray (Ansible), so I can't collect the logs for it anymore.
