This repository has been archived by the owner on Mar 23, 2020. It is now read-only.
make OCS exits successfully but leaves an increasing number of pods in Terminating state #161
[kni@rhhi-node-worker-0 install-scripts]$ oc -n openshift-storage get pods | grep Terminating | wc -l
313
[kni@rhhi-node-worker-0 install-scripts]$ oc -n openshift-storage get pods | grep Terminating | wc -l
312
[kni@rhhi-node-worker-0 install-scripts]$ oc -n openshift-storage get pods | grep Terminating | wc -l
313
[kni@rhhi-node-worker-0 install-scripts]$ oc -n openshift-storage get pods | grep Terminating | wc -l
314
[kni@rhhi-node-worker-0 install-scripts]$ oc -n openshift-storage get pods | grep Terminating | wc -l
316
[kni@rhhi-node-worker-0 install-scripts]$ oc -n openshift-storage get pods | grep Terminating | wc -l
318
[kni@rhhi-node-worker-0 install-scripts]$ oc -n openshift-storage get pods | grep Terminating | wc -l
317
[kni@rhhi-node-worker-0 install-scripts]$ oc -n openshift-storage get pods | grep Terminating | wc -l
319
[kni@rhhi-node-worker-0 install-scripts]$ oc -n openshift-storage get pods | grep Terminating | wc -l
319
[kni@rhhi-node-worker-0 install-scripts]$ oc -n openshift-storage get pods | grep Terminating | wc -l
319
[kni@rhhi-node-worker-0 install-scripts]$ oc -n openshift-storage get pods | grep Terminating | wc -l
320
[kni@rhhi-node-worker-0 install-scripts]$ oc -n openshift-storage get pods | grep Terminating | wc -l
320
[kni@rhhi-node-worker-0 install-scripts]$ oc -n openshift-storage get pods | grep Terminating | wc -l
340
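Rather than rerunning the command by hand, the count can be polled; `grep -c` also does the counting directly, replacing the `grep ... | wc -l` pipeline. A minimal sketch (the sample `oc get pods` output below is hypothetical stand-in data; on a live cluster the `watch` one-liner in the comment would be used instead):

```shell
# On a live cluster, something like this would poll the count every 5s
# (assumes oc is logged in to the cluster):
#   watch -n 5 'oc -n openshift-storage get pods | grep -c Terminating'
#
# grep -c counts matching lines in one step.
# Hypothetical sample of `oc get pods` output:
sample='rook-ceph-drain-canary-a-545fff7849-aaaaa   0/1   Terminating            0   75s
rook-ceph-tools-747987675d-ttqgv            0/1   CreateContainerError   0   13m
rook-ceph-drain-canary-b-545fff7849-bbbbb   0/1   Terminating            0   60s'
printf '%s\n' "$sample" | grep -c Terminating   # prints 2
```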
[kni@rhhi-node-worker-0 install-scripts]$ oc -n openshift-storage describe pods/rook-ceph-drain-canary-rhhi-node-master-0-545fff7849-bgrzv
Name: rook-ceph-drain-canary-rhhi-node-master-0-545fff7849-bgrzv
Namespace: openshift-storage
Priority: 0
PriorityClassName: <none>
Node: rhhi-node-master-0/
Labels: app=rook-ceph-drain-canary
kubernetes.io/hostname=rhhi-node-master-0
node_name=rhhi-node-master-0
pod-template-hash=545fff7849
Annotations: openshift.io/scc: restricted
Status: Terminating (lasts 75s)
Termination Grace Period: 30s
IP:
Controlled By: ReplicaSet/rook-ceph-drain-canary-rhhi-node-master-0-545fff7849
Containers:
busybox:
Image: busybox
Port: <none>
Host Port: <none>
Command:
bin/sh
Args:
-c
sleep infinity
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zgjmn (ro)
Conditions:
Type Status
PodScheduled True
Volumes:
default-token-zgjmn:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-zgjmn
Optional: false
QoS Class: BestEffort
Node-Selectors: kubernetes.io/hostname=rhhi-node-master-0
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
node.ocs.openshift.io/storage=true:NoSchedule
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 106s default-scheduler Successfully assigned openshift-storage/rook-ceph-drain-canary-rhhi-node-master-0-545fff7849-bgrzv to rhhi-node-master-0
Warning FailedCreatePodSandBox 92s kubelet, rhhi-node-master-0 Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_rook-ceph-drain-canary-rhhi-node-master-0-545fff7849-bgrzv_openshift-storage_86c8ee36-df0e-11e9-88fb-525400322df2_0(e73f15d4f5b3957e00c13d9140454c1551d618f482b4134df68e8be01b723905): Multus: Err adding pod to network "openshift-sdn": cannot set "openshift-sdn" ifname to "eth0": no netns: failed to Statfs "/proc/111588/ns/net": no such file or directory
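The Multus error above ("no netns: failed to Statfs .../ns/net") means the pod's network namespace is already gone while sandbox setup is still being retried, which is why these pods linger in Terminating. As a workaround (not a fix for the underlying sandbox error), stuck pods can be force-removed with `oc delete pod <name> --grace-period=0 --force`. A sketch of selecting the Terminating pod names, using hypothetical sample output in place of a live cluster:

```shell
# Hypothetical sample of `oc -n openshift-storage get pods` output:
pods='NAME                                        READY STATUS        RESTARTS AGE
rook-ceph-drain-canary-x-545fff7849-aaaaa   0/1   Terminating   0        75s
rook-ceph-tools-747987675d-ttqgv            0/1   Pending       0        13m
rook-ceph-drain-canary-y-545fff7849-bbbbb   0/1   Terminating   0        60s'
# Pick out names of Terminating pods; on a live cluster these would be
# passed to: oc -n openshift-storage delete pod --grace-period=0 --force <names>
printf '%s\n' "$pods" | awk '$3 == "Terminating" {print $1}'
```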
The rook-ceph-tools-747987675d-ttqgv pod is in CreateContainerError state:
[kni@rhhi-node-worker-0 install-scripts]$ oc -n openshift-storage describe pods/rook-ceph-tools-747987675d-ttqgv
Name: rook-ceph-tools-747987675d-ttqgv
Namespace: openshift-storage
Priority: 0
PriorityClassName: <none>
Node: rhhi-node-master-2/192.168.123.111
Start Time: Tue, 24 Sep 2019 16:52:25 -0400
Labels: app=rook-ceph-tools
pod-template-hash=747987675d
Annotations: k8s.v1.cni.cncf.io/networks-status:
[{
"name": "openshift-sdn",
"interface": "eth0",
"ips": [
"10.129.0.78"
],
"default": true,
"dns": {}
}]
openshift.io/scc: rook-ceph
Status: Pending
IP: 10.129.0.78
Controlled By: ReplicaSet/rook-ceph-tools-747987675d
Containers:
rook-ceph-tools:
Container ID:
Image: ceph/ceph:v14.2.4-20190917
Image ID:
Port: <none>
Host Port: <none>
Command:
/tini
Args:
-g
--
/usr/local/bin/toolbox.sh
State: Waiting
Reason: CreateContainerError
Ready: False
Restart Count: 0
Environment:
ROOK_ADMIN_SECRET: <set to the key 'admin-secret' in secret 'rook-ceph-mon'> Optional: false
Mounts:
/dev from dev (rw)
/etc/rook from mon-endpoint-volume (rw)
/lib/modules from libmodules (rw)
/sys/bus from sysbus (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zgjmn (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
dev:
Type: HostPath (bare host directory volume)
Path: /dev
HostPathType:
sysbus:
Type: HostPath (bare host directory volume)
Path: /sys/bus
HostPathType:
libmodules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
mon-endpoint-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: rook-ceph-mon-endpoints
Optional: false
default-token-zgjmn:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-zgjmn
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 13m default-scheduler Successfully assigned openshift-storage/rook-ceph-tools-747987675d-ttqgv to rhhi-node-master-2
Warning FailedMount 13m (x7 over 13m) kubelet, rhhi-node-master-2 MountVolume.SetUp failed for volume "mon-endpoint-volume" : configmaps "rook-ceph-mon-endpoints" not found
Normal Pulling 12m kubelet, rhhi-node-master-2 Pulling image "ceph/ceph:v14.2.4-20190917"
Normal Pulled 12m kubelet, rhhi-node-master-2 Successfully pulled image "ceph/ceph:v14.2.4-20190917"
Warning Failed 11m (x8 over 12m) kubelet, rhhi-node-master-2 Error: container create failed: container_linux.go:345: starting container process caused "exec: \"/tini\": stat /tini: no such file or directory"
Normal Pulled 4m (x38 over 12m) kubelet, rhhi-node-master-2 Container image "ceph/ceph:v14.2.4-20190917" already present on machine
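The repeated Failed events show the runtime cannot exec the container command: `/tini` does not exist in the ceph/ceph:v14.2.4-20190917 image, so the toolbox deployment's command would need to point at a path that does exist in that image (which path that is should be verified against the image itself; any replacement suggested here is an assumption). The failing path can be read straight out of the kubelet error text:

```shell
# Extract the path that stat() failed on from the kubelet event message
# (the message is abbreviated here; the quoted fragment is from the event above).
err='Error: container create failed: ... "exec: \"/tini\": stat /tini: no such file or directory"'
printf '%s\n' "$err" | sed -n 's/.*stat \([^:]*\):.*/\1/p'   # prints /tini
```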