Enable CSI snapshots #41
I used ceph-csi 3.7.2, which is compatible with K8s 1.23. Per the linked docs, I installed the snapshot controller:

```
root@docker-dev-ucsb-1:/home/outin/in/ceph-csi-3.7.2# ./scripts/install-snapshot.sh install
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2636  100  2636    0     0  56085      0 --:--:-- --:--:-- --:--:-- 56085
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1296  100  1296    0     0  43200      0 --:--:-- --:--:-- --:--:-- 43200
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/csi-snapshotter-psp created
role.rbac.authorization.k8s.io/csi-snapshotter-psp created
rolebinding.rbac.authorization.k8s.io/csi-snapshotter-psp created
deployment.apps/snapshot-controller created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
snapshotter pod status: true
snapshot controller creation successful
```

I then created the RBD snap resources:

```
root@docker-dev-ucsb-1:/home/outin/in/ceph-csi-3.7.2/examples/rbd# vim snapshotclass.yaml
root@docker-dev-ucsb-1:/home/outin/in/ceph-csi-3.7.2/examples/rbd# kubectl create -f snapshotclass.yaml
volumesnapshotclass.snapshot.storage.k8s.io/csi-rbdplugin-snapclass created
root@docker-dev-ucsb-1:/home/outin/in/ceph-csi-3.7.2/examples/rbd# kubectl get volumesnapshotclass
NAME                      DRIVER             DELETIONPOLICY   AGE
csi-rbdplugin-snapclass   rbd.csi.ceph.com   Delete           53s
root@docker-dev-ucsb-1:/home/outin/in/ceph-csi-3.7.2/examples/rbd# kubectl create -f snapshot.yaml
volumesnapshot.snapshot.storage.k8s.io/rbd-pvc-snapshot created
root@docker-dev-ucsb-1:/home/outin/in/ceph-csi-3.7.2/examples/rbd# kubectl get volumesnapshot
NAME               READYTOUSE   SOURCEPVC   SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS             SNAPSHOTCONTENT   CREATIONTIME   AGE
rbd-pvc-snapshot   false        rbd-pvc                                           csi-rbdplugin-snapclass                                    12m
```
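The `snapshotclass.yaml` edited above isn't shown in the session. As a sketch, the stock `examples/rbd/snapshotclass.yaml` shipped with ceph-csi looks roughly like this; the `clusterID` and the secret name/namespace are placeholders that must match the values in the cluster's ceph-csi config map and RBD provisioner secret:

```yaml
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-rbdplugin-snapclass
driver: rbd.csi.ceph.com
parameters:
  # Placeholder: must match the cluster ID in the ceph-csi config map
  clusterID: <cluster-id>
  # Placeholders: the secret holding Ceph credentials for the RBD provisioner
  csi.storage.k8s.io/snapshotter-secret-name: csi-rbd-secret
  csi.storage.k8s.io/snapshotter-secret-namespace: default
deletionPolicy: Delete
```

A mismatched `clusterID` or secret reference is a common reason a VolumeSnapshot stays at `READYTOUSE false`, as seen above.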
I continued on, creating the CephFS snap resources:

```
root@docker-dev-ucsb-1:/home/outin/in/ceph-csi-3.7.2/examples/cephfs# vim snapshotclass.yaml
root@docker-dev-ucsb-1:/home/outin/in/ceph-csi-3.7.2/examples/cephfs# kubectl create -f snapshotclass.yaml
volumesnapshotclass.snapshot.storage.k8s.io/csi-cephfsplugin-snapclass created
```

I created a new
Reopening as I didn't deploy this to production yet.
I created a successful test RBD snapshot. `snapshot.yaml`:

```yaml
---
# Snapshot API version compatibility matrix:
# v1beta1:
#   v1.17 <= k8s < v1.20
#   2.x <= snapshot-controller < v4.x
# v1:
#   k8s >= v1.20
#   snapshot-controller >= v4.x
# We recommend using the same version for the sidecar, controller, and CRDs
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: rbd-pvc-snapshot-test-2
spec:
  volumeSnapshotClassName: csi-rbdplugin-snapclass
  source:
    persistentVolumeClaimName: test-pvc
```
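Once a snapshot reports `READYTOUSE true`, it can be restored into a new PVC via a `dataSource` reference. A minimal sketch based on ceph-csi's `examples/rbd/pvc-restore.yaml`; the storage class name `csi-rbd-sc` is a placeholder and must match the class the source PVC was provisioned from:

```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-restore-test
spec:
  # Placeholder: must match the source PVC's storage class
  storageClassName: csi-rbd-sc
  # Restore from the snapshot created above
  dataSource:
    name: rbd-pvc-snapshot-test-2
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      # Must be at least the snapshot's RESTORESIZE
      storage: 1Gi
```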
I believe I'm not able to deploy CephFS snapshots due to the lack of CephFS credentials / auto-provisioning in our setup. I'm looking into this in #42.
Now that dynamic provisioning of CephFS volumes is working, I'm able to create CephFS volume snapshots. The following is working on k8s-dev:

```yaml
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-cephfsplugin-snapclass
driver: cephfs.csi.ceph.com
parameters:
  clusterID: 8aa4d4a0-a209-11ea-baf5-ffc787bfc812
  snapshotNamePrefix: "k8s-dev-csi-snap-"
  csi.storage.k8s.io/snapshotter-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/snapshotter-secret-namespace: default
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: cephfs-pvc-snapshot-test-2
  namespace: nick
spec:
  volumeSnapshotClassName: csi-cephfsplugin-snapclass
  source:
    persistentVolumeClaimName: csi-cephfs-pvc-test-12
```

```
outin@halt:~/k8s$ kubectl create -f cephfs-pvc-snapshot.yaml
volumesnapshot.snapshot.storage.k8s.io/cephfs-pvc-snapshot-test-2 created
outin@halt:~/k8s$ kubectl get volumesnapshot -n nick
NAME                         READYTOUSE   SOURCEPVC                SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS                SNAPSHOTCONTENT                                    CREATIONTIME   AGE
cephfs-pvc-snapshot-test-2   true         csi-cephfs-pvc-test-12                           10Gi          csi-cephfsplugin-snapclass   snapcontent-06183985-06f5-47e2-83a1-81aac6aeab47   6s             8s
```

Next up is to configure and test CSI RBD and CephFS snapshots on k8s-prod.
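As with RBD, a ready CephFS snapshot can be restored into a new PVC through a `dataSource` reference. A sketch assuming a CephFS storage class named `csi-cephfs-sc` (a placeholder; use the class the source PVC was provisioned from):

```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc-restore-test
  namespace: nick
spec:
  # Placeholder: must match the CephFS storage class of the source PVC
  storageClassName: csi-cephfs-sc
  # Restore from the snapshot shown above
  dataSource:
    name: cephfs-pvc-snapshot-test-2
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  # CephFS volumes support shared access
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      # Matches the snapshot's RESTORESIZE
      storage: 10Gi
```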
Velero is failing to complete backups during the initial backup runs in #37. It appears this is due to a lack of snapshot support.
Snapshot support should be enabled on both the K8s dev and prod clusters, and can be enabled with the instructions in https://github.com/ceph/ceph-csi/blob/devel/docs/snap-clone.md
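Since the end goal here is Velero backups: Velero's CSI snapshot support selects, per driver, the VolumeSnapshotClass carrying the `velero.io/csi-volumesnapshot-class: "true"` label, so the snapshot classes created for this issue would also need that label. A fragment showing only the label (the `parameters` from the full classes still apply):

```yaml
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-rbdplugin-snapclass
  labels:
    # Tells Velero's CSI plugin to use this class for rbd.csi.ceph.com volumes
    velero.io/csi-volumesnapshot-class: "true"
driver: rbd.csi.ceph.com
deletionPolicy: Delete
```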