We show how to install Kasten on a cluster that has no storage provisioner at all.
Some clusters may not have any storage provisioner but still need Kasten to protect their applications, and may eventually use blueprints to back up internal or external databases.
However, Kasten expects a storage class to be present on the cluster so that it can create its own PVCs for the inventory database (the catalog), the jobs queue, Prometheus, Grafana, and logging.
Kasten does not need high storage performance for its internal functioning, so a simple NFS server is enough to back an NFS provisioner on which Kasten can run.
Note: if you don't have an NFS server available, you can create one on Kubernetes itself; see the section below.
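Before starting, you can confirm that the cluster really has no storage class. On a bare cluster this command should report that no resources were found:

```shell
# List storage classes; on a cluster with no provisioner this
# typically prints "No resources found".
kubectl get storageclass
```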
Set up these two variables:
nfsIp=<Your NFS IP>
nfsPath=<Your NFS path on the server>
and install the NFS subdir provisioner:
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm upgrade --install -n nfs-storage --create-namespace nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=$nfsIp \
--set nfs.path=$nfsPath \
--set storageClass.name=k10-storage \
--set storageClass.archiveOnDelete=false
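As a quick sanity check, you can create a throwaway PVC against the new storage class and confirm that it gets bound (the PVC name test-pvc below is arbitrary; the provisioner's default binding mode is Immediate, so binding should not require a consumer pod):

```shell
# Create a small test PVC on the k10-storage class.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: k10-storage
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF
# STATUS should become Bound within a few seconds.
kubectl get pvc test-pvc
# Clean up the test claim.
kubectl delete pvc test-pvc
```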
You now have a storage class k10-storage that you can use to install Kasten; all you have to do is specify it:
helm repo add kasten https://charts.kasten.io/
helm repo update
helm upgrade k10 kasten/k10 --create-namespace --install -n kasten-io \
--set global.persistence.storageClass=k10-storage
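You can then verify that the installation completed and that Kasten's internal PVCs were provisioned on the k10-storage class (pod and PVC names will vary between K10 versions):

```shell
# Wait for all Kasten pods to become ready (timeout is a suggestion).
kubectl wait --for=condition=Ready pod --all -n kasten-io --timeout=600s
# The PVCs listed here should show k10-storage as their storage class
# and a Bound status.
kubectl get pvc -n kasten-io
```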
Most users can skip this step because an NFS server is usually available in the data center.
But if you don't have one, you can create an NFS server directly on the cluster using hostPath or local storage.
Using local storage is more complex, as we would need to configure disks on the worker nodes.
We want something quick, so let's use hostPath for our NFS server, and pin the pod to a node name to make sure it is always scheduled on the same node.
If you lose this node, then you'll have to:
- remove the NFS provisioner (helm uninstall ...)
- remove Kasten (helm uninstall ...)
- delete the nfs-server namespace
- reinstall the NFS server
- reinstall the NFS provisioner
- reinstall Kasten
- execute Kasten disaster recovery
This is why an external NFS share is preferable, but it is not always possible.
Let's create the NFS server now. Use kubectl get node to pick a node and set the variable nodeName:
nodeName=<your node name>
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Namespace
metadata:
  name: nfs-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
  namespace: nfs-storage
spec:
  selector:
    matchLabels:
      app: nfs-server
  template:
    metadata:
      labels:
        app: nfs-server
    spec:
      nodeName: $nodeName
      containers:
      - name: nfs-server
        image: k8s.gcr.io/volume-nfs:0.8
        ports:
        - name: nfs
          containerPort: 2049
        - name: mountd
          containerPort: 20048
        - name: rpcbind
          containerPort: 111
        securityContext:
          privileged: true
        volumeMounts:
        - name: storage
          mountPath: /exports
      volumes:
      - name: storage
        hostPath:
          path: /data/nfs
          type: DirectoryOrCreate
---
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
  namespace: nfs-storage
spec:
  ports:
  - name: nfs
    port: 2049
  - name: mountd
    port: 20048
  - name: rpcbind
    port: 111
  selector:
    app: nfs-server
EOF
Check that the NFS server is running:
kubectl get po -n nfs-storage
You should get an output like this:
NAME READY STATUS RESTARTS AGE
nfs-server-6b645cf6fd-ttv9b 1/1 Running 0 4h49m
Now you can retrieve the IP of the NFS server; the export path will be simply "/":
nfsIp=$(kubectl get svc -n nfs-storage nfs-server -o jsonpath='{.spec.clusterIP}')
nfsPath="/"
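Optionally, you can verify that the export is mountable from inside the cluster before installing the provisioner, using a throwaway pod that mounts the share (the pod name nfs-test and the busybox image are arbitrary choices for this check):

```shell
# Run a temporary pod that mounts the NFS export and lists its contents.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test
spec:
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "ls /mnt && sleep 3600"]
    volumeMounts:
    - name: nfs
      mountPath: /mnt
  volumes:
  - name: nfs
    nfs:
      server: $nfsIp
      path: $nfsPath
EOF
# Once the pod is running, the logs should list the export contents
# without any mount errors.
kubectl logs nfs-test
kubectl delete pod nfs-test
```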
Now install the NFS provisioner as described at the beginning of this guide, reusing these nfsIp and nfsPath variables.