
VMStorage pod can't start when an additional volume is mounted to the storageDataPath #784

Open
linjunzhe opened this issue Oct 15, 2023 · 2 comments


linjunzhe commented Oct 15, 2023

For example, I have the following VMCluster CR:

apiVersion: operator.victoriametrics.com/v1beta1
kind: VMCluster
metadata:
  name: vmcluster
spec:
  vmstorage:
    storageDataPath: "/storage"
    volumes:
      - hostPath:
          path: /data/victoria/storage
          type: ""
        name: vmstorage-volume
    volumeMounts:
      - mountPath: /storage
        name: vmstorage-volume
...

I use hostPath only as an example here; it could be another volume type such as cephfs or nfs. The point is that I want vmstorage to write data to the volume I mounted, not to a PVC or emptyDir defined in VMCluster.Spec.VMStorage.Storage. But today the VM Operator creates an emptyDir by default and mounts it to the path defined in storageDataPath, which in this case causes a conflict, and the vmstorage pod can't start with the following errors:

Events:
  Type     Reason            Age                  From                    Message
  ----     ------            ----                 ----                    -------
  Warning  FailedCreate      22s (x16 over 3m5s)  statefulset-controller  create Pod vmstorage-vmcluster-0 in StatefulSet vmstorage-vmcluster failed error: Pod "vmstorage-vmcluster-0" is invalid: spec.containers[0].volumeMounts[1].mountPath: Invalid value: "/storage": must be unique

The pod ends up with two volumeMounts on the same path:

      containers:
      - name: vmstorage
         ......
        volumeMounts:
        - mountPath: /storage
          name: vmstorage-db
        - mountPath: /storage
          name: vmstorage-volume
      volumes:
      - hostPath:
          path: /data/victoria/storage
          type: ""
        name: vmstorage-volume
      - emptyDir: {}
        name: vmstorage-db

My proposal is to add a check before appending the default VolumeMount for StorageDataPath: if an additional VolumeMount's mountPath already equals the StorageDataPath, skip the default VolumeMount. I can contribute the code if the proposal is accepted.

Haleygo (Contributor) commented Oct 16, 2023

Hello!
Thanks for the detailed report, and yes, the behavior should be improved.
But I think it's a little hacky to check whether one of the mountPaths equals the StorageDataPath. I think we can extend the current StorageSpec to support more VolumeSource types, like a native Volume. That way, StorageSpec can remain independent and serve StorageDataPath better with a sensible default behavior (see the sketch below).
wdyt @linjunzhe @f41gh7
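
For illustration only, a hypothetical sketch of what such an extended StorageSpec could look like on the VMCluster CR; the ability to place an arbitrary VolumeSource (here hostPath) directly under storage is an assumption for discussion, not an existing field:

apiVersion: operator.victoriametrics.com/v1beta1
kind: VMCluster
metadata:
  name: vmcluster
spec:
  vmstorage:
    storageDataPath: "/storage"
    storage:
      # hypothetical extension: accept a native VolumeSource here,
      # alongside the existing emptyDir/volumeClaimTemplate handling
      hostPath:
        path: /data/victoria/storage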

Haleygo added the enhancement (New feature or request) label on Oct 16, 2023
f41gh7 (Collaborator) commented Mar 6, 2024

> Thanks for the detailed report, and yes, the behavior should be improved. But I think it's a little hacky to check whether one of the mountPaths equals the StorageDataPath. I think we can extend the current StorageSpec to support more VolumeSource types, like a native Volume. That way, StorageSpec can remain independent and serve StorageDataPath better with a sensible default behavior. wdyt @linjunzhe @f41gh7

I think the correct approach is to validate user input and throw an error in this case (/storage is a reserved mount path and cannot be used for volume mounts). The same applies to any other reserved mount names.

vmstorage doesn't support the read-write-many option, and since all pods have the same configuration, there would be a conflict on the lock file.

The proper solution for this case is to create a storage provisioner (like https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) and use it in the claimTemplate definition.

The provisioner must create subfolders for each cluster pod, and then it won't cause any conflicts with storage access.
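
A rough sketch of that approach, assuming the local static provisioner is set up with a StorageClass named local-storage (the class name and size below are placeholders) and using the storage.volumeClaimTemplate field of the vmstorage spec:

apiVersion: operator.victoriametrics.com/v1beta1
kind: VMCluster
metadata:
  name: vmcluster
spec:
  vmstorage:
    storageDataPath: "/storage"
    storage:
      volumeClaimTemplate:
        spec:
          # "local-storage" is the class created for the
          # sig-storage-local-static-provisioner; each vmstorage pod
          # then gets its own PV backed by a dedicated local directory
          storageClassName: local-storage
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 100Gi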
