
Cannot import existing ZVOL with XFS file system #550

Closed

b1r3k opened this issue Jun 25, 2024 · 4 comments
Labels
bug: Something isn't working
need more info: More information is needed from user, need reproduction steps

Comments


b1r3k commented Jun 25, 2024

What steps did you take and what happened:

I'm following the doc [zfs-localpv/docs/import-existing-volume.md](https://github.com/openebs/zfs-localpv/blob/develop/docs/import-existing-volume.md), but my ZFS volume has an XFS filesystem on top of it.

I've created a PV:

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-cockroachdb-data-2
spec:
  capacity:
    storage: 333Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: openebs-zfspv-imported
  csi:
    driver: zfs.csi.openebs.io
    fsType: "xfs"
    volumeAttributes:
      openebs.io/poolname: granary # change the pool name accordingly
    volumeHandle: vm-102-disk-0 # This should be same as the zfs volume name
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - talos-qsn-ail

Note that it specifies fsType: "xfs". Unfortunately, I'm getting this error from the kubelet:

MountVolume.SetUp failed for volume "pv-cockroachdb-data-2" : rpc error: code = Internal desc = zfsvolumes.zfs.openebs.io "vm-102-disk-0" not found

What did you expect to happen:

I'd like to mount the ZFS volume with its XFS file system.

The output of the following commands will help us better understand what's going on:
(Pasting long output into a GitHub gist or other Pastebin is fine.)

  • kubectl logs -f openebs-zfs-controller-f78f7467c-blr7q -n openebs -c openebs-zfs-plugin
  • kubectl logs -f openebs-zfs-node-[xxxx] -n openebs -c openebs-zfs-plugin

  • kubectl get pods -n openebs
NAME                                                        READY   STATUS    RESTARTS         AGE
openebs-zfslocalpv-zfs-localpv-controller-c5b7f6b49-frn22   5/5     Running   29 (6h21m ago)   9d
openebs-zfslocalpv-zfs-localpv-controller-c5b7f6b49-xmqn9   5/5     Running   11 (6h21m ago)   47h
openebs-zfslocalpv-zfs-localpv-node-hzpxx                   2/2     Running   7 (6h18m ago)    9d
openebs-zfslocalpv-zfs-localpv-node-jw8pg                   2/2     Running   1 (8d ago)       9d
  • kubectl get zv -A -o yaml
apiVersion: v1
items:
- apiVersion: zfs.openebs.io/v1
  kind: ZFSVolume
  metadata:
    creationTimestamp: "2024-06-20T16:56:17Z"
    finalizers:
    - zfs.openebs.io/finalizer
    generation: 2
    labels:
      kubernetes.io/nodename: jester
    name: pvc-1df0d67f-1574-4b1b-87c3-a6c7dce19430
    namespace: openebs-localpv-zfs
    resourceVersion: "11923971"
    uid: 56f828ea-0dc0-4228-b4d7-6c5700e70d4a
  spec:
    capacity: "322122547200"
    fsType: zfs
    ownerNodeID: jester
    poolName: zfspv-pool
    volumeType: DATASET
  status:
    state: Ready
- apiVersion: zfs.openebs.io/v1
  kind: ZFSVolume
  metadata:
    creationTimestamp: "2024-06-25T09:05:33Z"
    finalizers:
    - zfs.openebs.io/finalizer
    generation: 4
    labels:
      kubernetes.io/nodename: jester
    name: pvc-3999ecbd-450c-4a8e-96a9-306c223c36e3
    namespace: openebs-localpv-zfs
    resourceVersion: "13896769"
    uid: 6a46561a-057d-4926-9cdb-22f614f170a5
  spec:
    capacity: "537944653824"
    fsType: zfs
    ownerNodeID: jester
    poolName: zfspv-pool
    volumeType: DATASET
  status:
    state: Ready
kind: List
metadata:
  resourceVersion: ""

Anything else you would like to add:

The ZVOL I want to import sits on the talos-qsn-ail node:

root@zfs-utils:/# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
granary                    3.06T   470G   160K  /granary
granary/backups             101G   470G   101G  /granary/backups
granary/docker-registry     128K   470G   128K  /granary/docker-registry
granary/home               1.12T   470G  1.12T  /granary/home
granary/private-backups     635G   470G   635G  /granary/private-backups
granary/subvol-100-disk-0   720M  7.30G   719M  /granary/subvol-100-disk-0
granary/vm-101-disk-0       658M   470G   658M  -
granary/vm-102-disk-0       459G   925G  4.39G  -
granary/vm-102-disk-1      61.9G   514G  18.2G  -
granary/vm-102-disk-2       724G  1.13T  41.7G  -
granary/vm-102-disk-3       629M   470G   629M  -

Environment:

  • LocalPV-ZFS version: 2.5.1 Helm chart
  • Kubernetes version (use kubectl version):
    Client Version: v1.28.2
    Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
    Server Version: v1.26.2
  • Kubernetes installer & version:
  • Cloud provider or hardware configuration: baremetal
  • OS (e.g. from /etc/os-release): Talos
Abhinandan-Purkait (Member) commented

@b1r3k The volumeHandle: vm-102-disk-0 that you have provided in the persistent volume spec.csi seems to be incorrect, right? There is no ZFSVolume (zv) named vm-102-disk-0 in the cluster (at least among the zvs that you have pasted). The only ones I see are pvc-1df0d67f-1574-4b1b-87c3-a6c7dce19430 and pvc-3999ecbd-450c-4a8e-96a9-306c223c36e3.
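
For context, the error pasted above ("zfsvolumes.zfs.openebs.io "vm-102-disk-0" not found") indicates that the node plugin resolves volumeHandle by looking up a ZFSVolume custom resource with that name, so the zvol existing on disk is not enough by itself. A quick way to check for the CR (assuming the driver namespace openebs-localpv-zfs seen in the zv listing above) is:

kubectl get zfsvolumes.zfs.openebs.io -n openebs-localpv-zfs vm-102-disk-0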


b1r3k commented Jul 2, 2024

> @b1r3k The volumeHandle: vm-102-disk-0 that you have provided in the persistent volume spec.csi seems to be incorrect, right? There is no ZFSVolume (zv) named vm-102-disk-0 in the cluster (at least among the zvs that you have pasted). The only ones I see are pvc-1df0d67f-1574-4b1b-87c3-a6c7dce19430 and pvc-3999ecbd-450c-4a8e-96a9-306c223c36e3.

That's surprising, since the volume can be seen using zfs list (granary/vm-102-disk-0 in the listing above). pvc-1df0d67f-1574-4b1b-87c3-a6c7dce19430 and pvc-3999ecbd-450c-4a8e-96a9-306c223c36e3 are on a different node (jester), where openebs-zfs created the volumes from scratch. I'm trying to import a ZVOL residing on node talos-qsn-ail; this ZVOL was created outside of openebs-zfs. Since https://github.com/openebs/zfs-localpv/blob/develop/docs/import-existing-volume.md describes importing a ZVOL, I assumed such a volume does not have to be created by openebs-zfs.

Abhinandan-Purkait (Member) commented

Did you do this step: https://github.com/openebs/zfs-localpv/blob/develop/docs/import-existing-volume.md#step-2--attach-the-volume-with-localpv-zfs, i.e. creating the ZFSVolume (ZV) CR for OpenEBS ZFS?
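
For reference, below is a minimal sketch of such a ZFSVolume CR for this case, modelled on the zv objects listed earlier in the issue and on the linked doc. The namespace (openebs-localpv-zfs), the capacity (333Gi expressed in bytes) and volumeType: ZVOL are assumptions; adjust them against the actual zvol and the doc's step 2:

---
apiVersion: zfs.openebs.io/v1
kind: ZFSVolume
metadata:
  name: vm-102-disk-0            # must match volumeHandle in the PV
  namespace: openebs-localpv-zfs # driver namespace, as in the zv listing above
spec:
  capacity: "357556027392"       # 333Gi in bytes; adjust to the real zvol size
  fsType: xfs                    # filesystem already present on the zvol
  ownerNodeID: talos-qsn-ail     # node where the zvol lives
  poolName: granary
  volumeType: ZVOL               # imported volume is a zvol, not a dataset

Once a CR like this exists, the volumeHandle lookup that failed in the earlier error should resolve.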

Abhinandan-Purkait added the bug and need more info labels on Jul 16, 2024

b1r3k commented Jul 26, 2024

> Did you do this step: https://github.com/openebs/zfs-localpv/blob/develop/docs/import-existing-volume.md#step-2--attach-the-volume-with-localpv-zfs, i.e. creating the ZFSVolume (ZV) CR for OpenEBS ZFS?

Thanks, that worked. Then I hit another wall, because the XFS filesystem was created on a partition of the ZVOL and not directly on the ZVOL. For some reason the partition does not get detected, so the mount fails.
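
For anyone hitting the same follow-on problem: with the standard ZFS-on-Linux udev links, a filesystem written directly to the zvol lives on the base device, while a filesystem inside a partition shows up under a -partN suffix, for example:

/dev/zvol/granary/vm-102-disk-0        (base zvol device, what appears to be mounted here)
/dev/zvol/granary/vm-102-disk-0-part1  (partition actually holding the XFS filesystem)

So a zvol that was partitioned first (as VM disks usually are) would need the filesystem to sit directly on the zvol for this import path to work.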

@b1r3k b1r3k closed this as completed Jul 26, 2024