What happened:
When I try use-pvc-as-cache-path, the second mount pod cannot reach the Running state.
What you expected to happen:
Each JuiceFS app mount pod has its own RBD cache block and reaches Running.
How to reproduce it (as minimally and precisely as possible):
The second app mount pod is stuck in the Init:0/1 state with the following warning:
Events:
Type     Reason              Age    From                     Message
----     ------              ----   ----                     -------
Warning  FailedAttachVolume  2m27s  attachdetach-controller  Multi-Attach error for volume "pvc-d744c8d6-145b-4006-9ba8-bf42fd4ad632" Volume is already used by pod(s) node-12-juicefs-pv-crxnpz
Warning  FailedMount         24s    kubelet                  Unable to attach or mount volumes: unmounted volumes=[cachedir-pvc-0], unattached volumes=[jfs-root-dir kube-api-access-zh5j2 cachedir-pvc-0 jfs-dir updatedb]: timed out waiting for the condition
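For reference, the setup is roughly the following (a minimal sketch, not the exact manifests from the cluster; names such as juicefs-cache-pvc, ceph-rbd, juicefs-sc, and juicefs-secret are placeholders):

# Illustrative sketch only. The cache PVC is RBD-backed block storage, so it is
# ReadWriteOnce and can only be attached to one node at a time.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: juicefs-cache-pvc
  namespace: kube-system        # assumed: same namespace as the JuiceFS mount pods
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd    # assumed RBD StorageClass name
  resources:
    requests:
      storage: 100Gi
---
# The JuiceFS StorageClass then points the mount pod's cache at that PVC,
# per the use-pvc-as-cache-path guide.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: juicefs-sc
provisioner: csi.juicefs.com
parameters:
  csi.storage.k8s.io/provisioner-secret-name: juicefs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/node-publish-secret-name: juicefs-secret
  csi.storage.k8s.io/node-publish-secret-namespace: kube-system
  juicefs/mount-cache-pvc: "juicefs-cache-pvc"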
Anything else we need to know?
How can each JuiceFS app mount pod be given its own RBD cache block?
Any suggestions?
Environment:
JuiceFS CSI Driver version (which image tag did your CSI Driver use):
v0.18.1
Kubernetes version (e.g. kubectl version):
v1.23.13
Object storage (cloud provider and region):
ceph 14
Metadata engine info (version, cloud provider managed or self maintained):
self maintained.
Network connectivity (JuiceFS to metadata engine, JuiceFS to object storage):
Others:
Do juicefs-app1 and juicefs-app2 run on the same node or on different nodes? If different nodes, the issue is very likely caused by the RWO accessModes.
@showjason They run on different nodes. How can this be solved when the cache PVC is a block-device PVC? The examples I can find all use cloud-vendor block devices.
@chenmiao1991 As far as I know, Ceph RBD doesn't support RWX, only ROX. A dedicated-cache-cluster might be one way to address your issue, or you could use NFS instead of block storage. @zxh326 sorry, do you have any better ideas?
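For example, something along these lines (a rough sketch; "cephfs" is an assumed RWX-capable StorageClass, and any shared-filesystem backend such as CephFS or NFS would do):

# Sketch: a shared-filesystem cache PVC that several nodes can mount at once,
# which avoids the Multi-Attach error.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: juicefs-cache-pvc
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteMany             # RBD cannot serve this; CephFS/NFS can
  storageClassName: cephfs      # assumed RWX-capable StorageClass
  resources:
    requests:
      storage: 100Gi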
@showjason We adopted the JuiceFS distributed file system precisely to replace file systems like NFS; switching the cache to NFS would take us back to where we started.
@showjason @zxh326 Maybe the juicefs-csi-node DaemonSet could automatically mount its own RBD as the cache, and the applications on that node could then share that RBD cache. HostPath mode is not a good fit, since it makes batch creation and deletion of RBD images inconvenient.
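One Kubernetes-native mechanism that matches this idea is a generic ephemeral volume (GA in Kubernetes 1.23): the PVC is created from a template when the pod starts and deleted with the pod, which avoids the batch creation/deletion problem. Whether the JuiceFS mount pod can be configured this way is a separate question; the sketch below only shows the mechanism for an ordinary pod, with placeholder names:

# Sketch: a generic ephemeral volume gives each pod its own RBD-backed volume,
# created with the pod and deleted with it. Names, image, and mount path are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-ephemeral-cache
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: cache
          mountPath: /var/jfsCache
  volumes:
    - name: cache
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: ceph-rbd   # assumed RBD StorageClass
            resources:
              requests:
                storage: 50Gi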