CSI driver will not work in default configuration with topology enabled in provisioner #2970
Maybe a solution here is to not report the topology capability in clusters where no topology information is configured/available. That would allow the driver to work out of the box with the current version of csi-provisioner. Alternatively, I have considered emitting topology information even in single-zone clusters, but that would require quite a few changes, and since it is a manual process, clusters would break on upgrade. My personal preference would be a CLI flag that can be specified when starting the driver.
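For what it's worth, a minimal sketch of the CLI-flag idea could look like the following. The flag name, the `identityServer` receiver type, and the surrounding wiring are made up for illustration and do not match the actual code in pkg/csi/service/identity.go; only the capability types come from the CSI spec Go bindings.

```go
package service

import (
	"context"
	"flag"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// Hypothetical flag; the real driver would wire this through its own config.
var enableTopology = flag.Bool("enable-topology", false,
	"advertise VOLUME_ACCESSIBILITY_CONSTRAINTS so csi-provisioner does topology-aware provisioning")

// identityServer is a stand-in type for this sketch, not the driver's real identity service.
type identityServer struct{}

func (s *identityServer) GetPluginCapabilities(ctx context.Context,
	req *csi.GetPluginCapabilitiesRequest) (*csi.GetPluginCapabilitiesResponse, error) {

	caps := []*csi.PluginCapability{
		{
			Type: &csi.PluginCapability_Service_{
				Service: &csi.PluginCapability_Service{
					Type: csi.PluginCapability_Service_CONTROLLER_SERVICE,
				},
			},
		},
	}
	// Only advertise topology when the cluster actually has zone/region
	// information configured; otherwise csi-provisioner (v5+) waits for node
	// topology keys that never appear and provisioning fails.
	if *enableTopology {
		caps = append(caps, &csi.PluginCapability{
			Type: &csi.PluginCapability_Service_{
				Service: &csi.PluginCapability_Service{
					Type: csi.PluginCapability_Service_VOLUME_ACCESSIBILITY_CONSTRAINTS,
				},
			},
		})
	}
	return &csi.GetPluginCapabilitiesResponse{Capabilities: caps}, nil
}
```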
Another thing: disabling the topology feature in csi-provisioner is apparently not enough. With the latest version of csi-provisioner, the vSphere CSI driver is unable to delete in-tree vSphere PVs - https://prow.ci.openshift.org/view/gs/test-platform-results/pr-logs/pull/openshift_vmware-vsphere-csi-driver-operator/241/pull-ci-openshift-vmware-vsphere-csi-driver-operator-master-e2e-vsphere/1816155462064672768
Moved to #2981
We're hitting this.
How? We are currently stuck on provisioner v4.0.1 to work around this.
After kubernetes-csi/external-provisioner#1167 was merged, the topology feature is enabled by default in csi-provisioner.
Since the vSphere CSI driver returns the topology capability by default (pkg/csi/service/identity.go:65) even when the cluster has no topology configured, all volume provisioning operations will fail.
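For illustration, a simplified, self-contained version of the capability set the driver advertises (using the CSI spec Go bindings; the real identity.go returns more than this and is structured differently) would be:

```go
package main

import (
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// currentCapabilities approximates the GetPluginCapabilities reply: the
// topology capability is included unconditionally, which is enough for
// csi-provisioner v5+ to switch on topology-aware provisioning.
func currentCapabilities() *csi.GetPluginCapabilitiesResponse {
	return &csi.GetPluginCapabilitiesResponse{
		Capabilities: []*csi.PluginCapability{
			{
				Type: &csi.PluginCapability_Service_{
					Service: &csi.PluginCapability_Service{
						Type: csi.PluginCapability_Service_CONTROLLER_SERVICE,
					},
				},
			},
			{
				Type: &csi.PluginCapability_Service_{
					Service: &csi.PluginCapability_Service{
						Type: csi.PluginCapability_Service_VOLUME_ACCESSIBILITY_CONSTRAINTS,
					},
				},
			},
		},
	}
}

func main() {
	// Prints CONTROLLER_SERVICE and VOLUME_ACCESSIBILITY_CONSTRAINTS.
	for _, c := range currentCapabilities().Capabilities {
		fmt.Println(c.GetService().GetType())
	}
}
```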
cc @divyenpatel @xing-yang @jingxu97