---
copyright:
  years: 2014, 2018
lastupdated: "2018-04-12"
---

{:new_window: target="_blank"} {:shortdesc: .shortdesc} {:screen: .screen} {:pre: .pre} {:table: .aria-labeledby="caption"} {:codeblock: .codeblock} {:tip: .tip} {:download: .download}

# Saving data in your cluster
{: #storage}

You can persist data in {{site.data.keyword.containerlong}} to share data between app instances and to protect your data from being lost if a component in your Kubernetes cluster fails.

## Planning highly available storage
{: #planning}

In {{site.data.keyword.containerlong_notm}}, you can choose from several options to store your app data and share data across pods in your cluster. However, not all storage options offer the same level of persistence and availability in situations where a component in your cluster or a whole site fails.
{: shortdesc}

### Non-persistent data storage options
{: #non_persistent}

You can use non-persistent storage options if your data does not need to be stored persistently so that you can recover it after a component in your cluster fails, or if the data does not need to be shared across app instances. You can also use non-persistent storage options to unit-test your app components or try out new features.
{: shortdesc}

The following image shows available non-persistent data storage options in {{site.data.keyword.containerlong_notm}}. These options are available for free and standard clusters.

*Figure: Non-persistent data storage options*

Table. Non-persistent storage options

**1. Inside the container or pod**: Containers and pods are, by design, short-lived and can fail unexpectedly. However, you can write data to the local file system of the container to store data throughout the lifecycle of the container. Data inside a container cannot be shared with other containers or pods and is lost when the container crashes or is removed. For more information, see [Storing data in a container](https://docs.docker.com/storage/).

**2. On the worker node**: Every worker node is set up with primary and secondary storage that is determined by the machine type that you select for your worker node. The primary storage is used to store data from the operating system and can be accessed by using a [Kubernetes hostPath volume ![External link icon](../icons/launch-glyph.svg "External link icon")](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath). The secondary storage is used to store data in `/var/lib/docker`, the directory that all the container data is written to. You can access the secondary storage by using a [Kubernetes emptyDir volume ![External link icon](../icons/launch-glyph.svg "External link icon")](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir).

While hostPath volumes are used to mount files from the worker node file system to your pod, emptyDir creates an empty directory that is assigned to a pod in your cluster. All containers in that pod can read from and write to that volume. Because the volume is assigned to one specific pod, data cannot be shared with other pods in a replica set. A minimal emptyDir example follows this table.

A hostPath or emptyDir volume and its data are removed when:

  • The worker node is deleted.
  • The worker node is reloaded or updated.
  • The cluster is deleted.
  • The {{site.data.keyword.Bluemix_notm}} account reaches a suspended state.

In addition, data in an emptyDir volume is removed when:

  • The assigned pod is permanently deleted from the worker node.
  • The assigned pod is scheduled on another worker node.

Note: If the container inside the pod crashes, the data in the volume is still available on the worker node.
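
For illustration, here is a minimal sketch of a pod that uses an emptyDir volume. The pod name, image placeholder, and mount path are hypothetical examples, not values from this documentation.

    apiVersion: v1
    kind: Pod
    metadata:
      name: scratch-pod
    spec:
      containers:
      - name: app
        image: <image_name>
        volumeMounts:
        - name: scratch                 # every container in this pod can read from and write to this volume
          mountPath: /tmp/scratch
      volumes:
      - name: scratch
        emptyDir: {}                    # empty directory that exists only while the pod runs on this worker node

{: codeblock}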

### Persistent data storage options for high availability
{: #persistent}

The main challenge when you create highly available stateful apps is to persist data across multiple app instances in multiple locations, and to keep data in sync at all times. For highly available data, you want a master database with multiple instances that are spread across multiple data centers or even multiple regions, and data in this master database must be continuously replicated. All instances in your cluster must read from and write to this master database. If one instance of the master goes down, other instances can take over the workload, so that you do not experience downtime for your apps.
{: shortdesc}

The following image shows the options that you have in {{site.data.keyword.containerlong_notm}} to make your data highly available in a standard cluster. The option that is right for you depends on the following factors:

  • The type of app that you have: For example, you might have an app that must store data on a file basis rather than inside a database.
  • Legal requirements for where to store and route the data: For example, you might be obligated to store and route data in the United States only and you cannot use a service that is located in Europe.
  • Backup and restore options: Every storage option comes with capabilities to back up and restore data. Check that the available backup and restore options meet the requirements of your disaster recovery plan, such as the frequency of backups or the capability to store data outside your primary data center.
  • Global replication: For high availability, you might want to set up multiple instances of storage that are distributed and replicated across data centers worldwide.

*Figure: High availability options for persistent storage*

Table. Persistent storage options

**1. NFS file storage**: With this option, you can persist app and container data by using Kubernetes persistent volumes. Volumes are hosted on [Endurance and Performance NFS-based file storage ![External link icon](../icons/launch-glyph.svg "External link icon")](https://www.ibm.com/cloud/file-storage/details), which can be used for apps that store data on a file basis rather than in a database. File storage is encrypted at rest.

{{site.data.keyword.containershort_notm}} provides predefined storage classes that define the range of sizes of the storage, IOPS, the delete policy, and the read and write permissions for the volume. To initiate a request for NFS-based file storage, you must create a [persistent volume claim (PVC)](cs_storage.html#create). After you submit a PVC, {{site.data.keyword.containershort_notm}} dynamically provisions a persistent volume that is hosted on NFS-based file storage. [You can mount the PVC](cs_storage.html#app_volume_mount) as a volume to your deployment to allow the containers to read from and write to the volume.

Persistent volumes are provisioned in the data center where the worker node is located. You can share data across the same replica set or with other deployments in the same cluster. You cannot share data across clusters when they are located in different data centers or regions.

By default, NFS storage is not backed up automatically. You can set up a periodic backup for your cluster by using the provided [backup and restore mechanisms](cs_storage.html#backup_restore). When a container crashes or a pod is removed from a worker node, the data is not removed and can still be accessed by other deployments that mount the volume.

Note: Persistent NFS file share storage is charged on a monthly basis. If you provision persistent storage for your cluster and remove it immediately, you still pay the monthly charge for the persistent storage, even if you used it only for a short amount of time.

**2. Cloud database service**: With this option, you can persist data by using an {{site.data.keyword.Bluemix_notm}} database cloud service, such as [IBM Cloudant NoSQL DB](/docs/services/Cloudant/getting-started.html#getting-started-with-cloudant). Data that is stored with this option can be accessed across clusters, locations, and regions.

You can choose to configure a single database instance that all your apps access, or to [set up multiple instances across data centers and replication](/docs/services/Cloudant/guides/active-active.html#configuring-cloudant-nosql-db-for-cross-region-disaster-recovery) between the instances for higher availability. In IBM Cloudant NoSQL database, data is not backed up automatically. You can use the provided [backup and restore mechanisms](/docs/services/Cloudant/guides/backup-cookbook.html#cloudant-nosql-db-backup-and-recovery) to protect your data from a site failure.

To use a service in your cluster, you must [bind the {{site.data.keyword.Bluemix_notm}} service](cs_integrations.html#adding_app) to a namespace in your cluster. When you bind the service to the cluster, a Kubernetes secret is created. The Kubernetes secret holds confidential information about the service, such as the URL to the service, your user name, and your password. You can mount the secret as a secret volume to your pod and access the service by using the credentials in the secret. By mounting the secret volume to other pods, you can also share data between pods. When a container crashes or a pod is removed from a worker node, the data is not removed and can still be accessed by other pods that mount the secret volume. A minimal example of a secret volume mount follows this table.

Most {{site.data.keyword.Bluemix_notm}} database services provide disk space for a small amount of data at no cost, so you can test its features.

**3. On-premises database**: If your data must be stored on-site for legal reasons, you can [set up a VPN connection](cs_vpn.html#vpn) to your on-premises database and use the existing storage, backup, and replication mechanisms in your data center.

{: caption="Table. Persistent data storage options for deployments in Kubernetes clusters" caption-side="top"}
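
For illustration, here is a minimal sketch of a pod that mounts a service-binding secret as a secret volume. The pod name, image placeholder, mount path, and secret name are hypothetical; run `kubectl get secrets` in the namespace to find the actual name of the secret that was created when you bound the service.

    apiVersion: v1
    kind: Pod
    metadata:
      name: mypod
    spec:
      containers:
      - name: app
        image: <image_name>
        volumeMounts:
        - name: service-credentials
          mountPath: /etc/service-credentials   # the secret keys appear as read-only files in this directory
          readOnly: true
      volumes:
      - name: service-credentials
        secret:
          secretName: binding-cloudant          # hypothetical secret that the service binding created

{: codeblock}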


## Using existing NFS file shares in clusters
{: #existing}

If you already have existing NFS file shares in your IBM Cloud infrastructure (SoftLayer) account that you want to use with Kubernetes, you can do so by creating a persistent volume (PV) for your existing storage.
{:shortdesc}

A persistent volume (PV) is a Kubernetes resource that represents an actual storage device that is provisioned in a data center. Persistent volumes abstract the details of how a specific storage type is provisioned by {{site.data.keyword.Bluemix_notm}} Storage. To mount a PV to your cluster, you must request persistent storage for your pod by creating a persistent volume claim (PVC). The following diagram illustrates the relationship between PVs and PVCs.

*Figure: Create persistent volumes and persistent volume claims*

As depicted in the diagram, to enable existing NFS file shares to be used with Kubernetes, you must create PVs with a certain size and access mode and create a PVC that matches the PV specification. If the PV and PVC match, they are bound to each other. Only bound PVCs can be used by the cluster user to mount the volume to a deployment. This process is referred to as static provisioning of persistent storage.

Before you begin, make sure that you have an existing NFS file share that you can use to create your PV. For example, if you previously created a PVC with a retain storage class policy, you can use that retained data in the existing NFS file share for this new PVC.

Note: Static provisioning of persistent storage only applies to existing NFS file shares. If you do not have existing NFS file shares, cluster users can use the dynamic provisioning process to add PVs.

To create a PV and matching PVC, follow these steps.

  1. In your IBM Cloud infrastructure (SoftLayer) account, look up the ID and path of the NFS file share where you want to create your PV object. In addition, authorize the file storage to the subnets in the cluster. This authorization gives your cluster access to the storage.

    1. Log in to your IBM Cloud infrastructure (SoftLayer) account.
    2. Click Storage.
    3. Click File Storage and from the Actions menu, select Authorize Host.
    4. Select Subnets.
    5. From the drop-down list, select the private VLAN subnet that your worker node is connected to. To find the subnet of your worker node, run `bx cs workers <cluster_name>` and compare the Private IP of your worker node with the subnets in the drop-down list.
    6. Click Submit.
    7. Click the name of the file storage.
    8. Make a note of the Mount Point field. The field is displayed as <server>:/<path>.
  2. Create a storage configuration file for your PV. Include the server and path from the file storage Mount Point field.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: mypv
    spec:
      capacity:
        storage: "20Gi"
      accessModes:
        - ReadWriteMany
      nfs:
        server: "nfslon0410b-fz.service.networklayer.com"
        path: "/IBM01SEV8491247_0908/data01"
    

    {: codeblock}

    Table. Understanding the YAML file components

    | Field | Description |
    |-------|-------------|
    | `name` | Enter the name of the PV object that you want to create. |
    | `spec/capacity/storage` | Enter the storage size of the existing NFS file share. The storage size must be written in gigabytes, for example, 20Gi (20 GB) or 1000Gi (1 TB), and the size must match the size of the existing file share. |
    | `accessModes` | Access modes define the way that the PVC can be mounted to a worker node. <br>• ReadWriteOnce (RWO): The PV can be mounted to deployments in a single worker node only. Containers in deployments that are mounted to this PV can read from and write to the volume. <br>• ReadOnlyMany (ROX): The PV can be mounted to deployments that are hosted on multiple worker nodes. Deployments that are mounted to this PV can only read from the volume. <br>• ReadWriteMany (RWX): The PV can be mounted to deployments that are hosted on multiple worker nodes. Deployments that are mounted to this PV can read from and write to the volume. |
    | `spec/nfs/server` | Enter the NFS file share server ID. |
    | `path` | Enter the path to the NFS file share where you want to create the PV object. |
  3. Create the PV object in your cluster.

    kubectl apply -f <yaml_path>
    

    {: pre}

    Example

    kubectl apply -f deploy/kube-config/pv.yaml
    

    {: pre}

  4. Verify that the PV is created.

    kubectl get pv
    

    {: pre}

  5. Create another configuration file to create your PVC. In order for the PVC to match the PV object that you created earlier, you must choose the same value for storage and accessMode. The storage-class field must be empty. If any of these fields do not match the PV, then a new PV is created automatically instead.

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: mypvc
      annotations:
        volume.beta.kubernetes.io/storage-class: ""
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: "20Gi"
    

    {: codeblock}

  6. Create your PVC.

    kubectl apply -f deploy/kube-config/mypvc.yaml
    

    {: pre}

  7. Verify that your PVC is created and bound to the PV object. This process can take a few minutes.

    kubectl describe pvc mypvc
    

    {: pre}

    Your output looks similar to the following.

    Name: mypvc
    Namespace: default
    StorageClass:	""
    Status: Bound
    Volume: pvc-0d787071-3a67-11e7-aafc-eef80dd2dea2
    Labels: <none>
    Capacity: 20Gi
    Access Modes: RWX
    Events:
      FirstSeen LastSeen Count From        SubObjectPath Type Reason Message
      --------- -------- ----- ----        ------------- -------- ------ -------
      3m 3m 1 {ibm.io/ibmc-file 31898035-3011-11e7-a6a4-7a08779efd33 } Normal Provisioning External provisioner is provisioning volume for claim "default/my-persistent-volume-claim"
      3m 1m	 10 {persistentvolume-controller } Normal ExternalProvisioning cannot find provisioner "ibm.io/ibmc-file", expecting that a volume for the claim is provisioned either manually or via external software
      1m 1m 1 {ibm.io/ibmc-file 31898035-3011-11e7-a6a4-7a08779efd33 } Normal ProvisioningSucceeded	Successfully provisioned volume pvc-0d787071-3a67-11e7-aafc-eef80dd2dea2
    

    {: screen}

You successfully created a PV object and bound it to a PVC. Cluster users can now mount the PVC to their deployments and start reading from and writing to the PV object.
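
For example, here is a minimal sketch of a pod that mounts the mypvc claim from the previous steps. The pod name, image placeholder, and mount path are hypothetical.

    apiVersion: v1
    kind: Pod
    metadata:
      name: nfs-test-pod
    spec:
      containers:
      - name: app
        image: <image_name>
        volumeMounts:
        - name: nfs-data
          mountPath: /data              # path inside the container where the NFS file share appears
      volumes:
      - name: nfs-data
        persistentVolumeClaim:
          claimName: mypvc              # the PVC that is bound to the existing NFS file share

{: codeblock}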


## Adding NFS file storage to apps
{: #create}

Create a persistent volume claim (PVC) to provision NFS file storage for your cluster. Then, mount this claim to a persistent volume (PV) to ensure that data is available even if the pods crash or shut down.
{:shortdesc}

The NFS file storage that backs the PV is clustered by IBM in order to provide high availability for your data. The storage classes describe the types of storage offerings available and define aspects such as the data retention policy, size in gigabytes, and IOPS when you create your PV.

Before you begin: If you have a firewall, allow egress access for the IBM Cloud infrastructure (SoftLayer) IP ranges of the locations (data centers) that your clusters are in, so that you can create PVCs.

To add persistent storage:

  1. Review the available storage classes. {{site.data.keyword.containerlong}} provides pre-defined storage classes for NFS file storage so that the cluster admin does not have to create any storage classes. The ibmc-file-bronze storage class is the same as the default storage class.

    kubectl get storageclasses
    

    {: pre}

    $ kubectl get storageclasses
    NAME                         TYPE
    default                      ibm.io/ibmc-file
    ibmc-file-bronze (default)   ibm.io/ibmc-file
    ibmc-file-custom             ibm.io/ibmc-file
    ibmc-file-gold               ibm.io/ibmc-file
    ibmc-file-retain-bronze      ibm.io/ibmc-file
    ibmc-file-retain-custom      ibm.io/ibmc-file
    ibmc-file-retain-gold        ibm.io/ibmc-file
    ibmc-file-retain-silver      ibm.io/ibmc-file
    ibmc-file-silver             ibm.io/ibmc-file
    

    {: screen}

    Tip: If you want to change the default storage class, run `kubectl patch storageclass <storageclass> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'` and replace `<storageclass>` with the name of the storage class.

  2. Decide if you want to keep your data and the NFS file share after you delete the PVC.

    • If you want to keep your data, then choose a retain storage class. When you delete the PVC, the PV is removed, but the NFS file and your data still exist in your IBM Cloud infrastructure (SoftLayer) account. Later, to access this data in your cluster, create a PVC and a matching PV that refers to your existing NFS file.
    • If you want the data and your NFS file share to be deleted when you delete the PVC, choose a storage class without retain.
  3. If you choose a bronze, silver, or gold storage class: You get Endurance storage, which defines the IOPS per GB for each class. However, you can determine the total IOPS by choosing a size within the available range. You can select any whole number of gigabyte sizes within the allowed size range (such as 20 Gi, 256 Gi, or 11854 Gi). For example, if you select a 1000Gi file share size in the silver storage class of 4 IOPS per GB, your volume has a total of 4000 IOPS. The more IOPS your PV has, the faster it processes input and output operations. The following table describes the IOPS per gigabyte and size range for each storage class.

    Table of storage class size ranges and IOPS per gigabyte

    | Storage class | IOPS per gigabyte | Size range in gigabytes |
    |---------------|-------------------|-------------------------|
    | Bronze (default) | 2 IOPS/GB | 20-12000 Gi |
    | Silver | 4 IOPS/GB | 20-12000 Gi |
    | Gold | 10 IOPS/GB | 20-4000 Gi |

    **Example command to show the details of a storage class**:

    kubectl describe storageclasses ibmc-file-silver
  4. If you choose the custom storage class: You get Performance storage and have more control over choosing the combination of IOPS and size. For example, if you select a size of 40Gi for your PVC, you can choose IOPS that is a multiple of 100 in the range of 100 - 2000 IOPS. The following table shows you what range of IOPS you can choose depending on the size that you select.

    Table of custom storage class size ranges and IOPS

    | Size range in gigabytes | IOPS range in multiples of 100 |
    |-------------------------|--------------------------------|
    | 20-39 Gi | 100-1000 IOPS |
    | 40-79 Gi | 100-2000 IOPS |
    | 80-99 Gi | 100-4000 IOPS |
    | 100-499 Gi | 100-6000 IOPS |
    | 500-999 Gi | 100-10000 IOPS |
    | 1000-1999 Gi | 100-20000 IOPS |
    | 2000-2999 Gi | 200-40000 IOPS |
    | 3000-3999 Gi | 200-48000 IOPS |
    | 4000-7999 Gi | 300-48000 IOPS |
    | 8000-9999 Gi | 500-48000 IOPS |
    | 10000-12000 Gi | 1000-48000 IOPS |

    **Example command to show the details for a custom storage class**:

    kubectl describe storageclasses ibmc-file-retain-custom
  5. Decide whether you want to be billed on an hourly or monthly basis. By default, you are billed monthly.

  6. Create a configuration file to define your PVC and save the configuration as a .yaml file.

    • Example for bronze, silver, gold storage classes: The following .yaml file creates a claim that is named mypvc of the "ibmc-file-silver" storage class, billed "hourly", with a gigabyte size of 24Gi.

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: mypvc
        annotations:
          volume.beta.kubernetes.io/storage-class: "ibmc-file-silver"
        labels:
          billingType: "hourly"
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 24Gi
      

      {: codeblock}

    • Example for custom storage classes: The following .yaml file creates a claim that is named mypvc of the storage class ibmc-file-retain-custom, billed at the default of "monthly", with a gigabyte size of 45Gi and IOPS of "300".

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: mypvc
        annotations:
          volume.beta.kubernetes.io/storage-class: "ibmc-file-retain-custom"
        labels:
          billingType: "monthly"
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 45Gi
            iops: "300"
      

      {: codeblock}

      Table. Understanding the YAML file components

      | Field | Description |
      |-------|-------------|
      | `metadata/name` | Enter the name of the PVC. |
      | `metadata/annotations` | Specify the storage class for the PV: <br>• `ibmc-file-bronze` / `ibmc-file-retain-bronze`: 2 IOPS per GB. <br>• `ibmc-file-silver` / `ibmc-file-retain-silver`: 4 IOPS per GB. <br>• `ibmc-file-gold` / `ibmc-file-retain-gold`: 10 IOPS per GB. <br>• `ibmc-file-custom` / `ibmc-file-retain-custom`: Multiple values of IOPS available. <br><br>If you do not specify a storage class, the PV is created with the default storage class. <br><br>**Tip:** If you want to change the default storage class, run `kubectl patch storageclass <storageclass> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'` and replace `<storageclass>` with the name of the storage class. |
      | `metadata/labels/billingType` | Specify the frequency for which your storage bill is calculated, "monthly" or "hourly". The default is "monthly". |
      | `spec/resources/requests/storage` | Enter the size of the file storage, in gigabytes (Gi). Choose a whole number within the allowable size range. <br><br>**Note:** After your storage is provisioned, you cannot change the size of your NFS file share. Make sure to specify a size that matches the amount of data that you want to store. |
      | `spec/resources/requests/iops` | This option is for custom storage classes only (`ibmc-file-custom` / `ibmc-file-retain-custom`). Specify the total IOPS for the storage, selecting a multiple of 100 within the allowable range. To see all options, run `kubectl describe storageclasses`. If you choose an IOPS other than one that is listed, the IOPS is rounded up. |
  7. Create the PVC.

    kubectl apply -f <local_file_path>
    

    {: pre}

  8. Verify that your PVC is created and bound to the PV. This process can take a few minutes.

    kubectl describe pvc mypvc
    

    {: pre}

    Example output:

    Name:		mypvc
    Namespace:	default
    StorageClass:	""
    Status:		Bound
    Volume:		pvc-0d787071-3a67-11e7-aafc-eef80dd2dea2
    Labels:		<none>
    Capacity:	20Gi
    Access Modes:	RWX
    Events:
      FirstSeen	LastSeen	Count	From								SubObjectPath	Type		Reason			Message
      ---------	--------	-----	----								-------------	--------	------			-------
      3m		3m		1	{ibm.io/ibmc-file 31898035-3011-11e7-a6a4-7a08779efd33 }			Normal		Provisioning		External provisioner is provisioning volume for claim "default/my-persistent-volume-claim"
      3m		1m		10	{persistentvolume-controller }							Normal		ExternalProvisioning	cannot find provisioner "ibm.io/ibmc-file", expecting that a volume for the claim is provisioned either manually or via external software
      1m		1m		1	{ibm.io/ibmc-file 31898035-3011-11e7-a6a4-7a08779efd33 }			Normal		ProvisioningSucceeded	Successfully provisioned volume pvc-0d787071-3a67-11e7-aafc-eef80dd2dea2
    
    

    {: screen}

  9. {: #app_volume_mount}To mount the PVC to your deployment, create a configuration .yaml file.

    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: <deployment_name>
      labels:
        app: <deployment_label>
    spec:
      selector:
        matchLabels:
          app: <app_name>
      template:
        metadata:
          labels:
            app: <app_name>
        spec:
          containers:
          - image: <image_name>
            name: <container_name>
            volumeMounts:
            - name: <volume_name>
              mountPath: /<file_path>
          volumes:
          - name: <volume_name>
            persistentVolumeClaim:
              claimName: <pvc_name>
    

    {: codeblock}

    Table. Understanding the YAML file components

    | Field | Description |
    |-------|-------------|
    | `metadata/labels/app` | A label for the deployment. |
    | `spec/selector/matchLabels/app` <br> `spec/template/metadata/labels/app` | A label for your app. |
    | `template/metadata/labels/app` | A label for the deployment. |
    | `spec/containers/image` | The name of the image that you want to use. To list available images in your {{site.data.keyword.registryshort_notm}} account, run `bx cr image-list`. |
    | `spec/containers/name` | The name of the container that you want to deploy to your cluster. |
    | `spec/containers/volumeMounts/mountPath` | The absolute path of the directory to where the volume is mounted inside the container. |
    | `spec/containers/volumeMounts/name` | The name of the volume to mount to your pod. |
    | `volumes/name` | The name of the volume to mount to your pod. Typically this name is the same as `volumeMounts/name`. |
    | `volumes/persistentVolumeClaim/claimName` | The name of the PVC that you want to use as your volume. When you mount the volume to the pod, Kubernetes identifies the PV that is bound to the PVC and enables the user to read from and write to the PV. |
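
    For reference, here is a sketch of the same template with hypothetical values filled in. The deployment name, labels, image reference, volume name, and mount path are examples only; replace them with your own values.

    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: myapp-deployment
      labels:
        app: myapp
    spec:
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
          - image: myregistry/mynamespace/myapp:1.0   # hypothetical image reference
            name: myapp
            volumeMounts:
            - name: myvol
              mountPath: /mnt/data
          volumes:
          - name: myvol
            persistentVolumeClaim:
              claimName: mypvc        # the PVC that you created earlier

    {: codeblock}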
  10. Create the deployment and mount the PVC.

    kubectl apply -f <local_yaml_path>
    

    {: pre}

  11. Verify that the volume is successfully mounted.

    kubectl describe deployment <deployment_name>
    

    {: pre}

    The mount point is in the Volume Mounts field and the volume is in the Volumes field.

     Volume Mounts:
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-tqp61 (ro)
          /volumemount from myvol (rw)
    ...
    Volumes:
      myvol:
        Type:	PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:	mypvc
        ReadOnly:	false
    

    {: screen}

{: #nonroot} {: #enabling_root_permission}

NFS permissions: Looking for documentation on enabling NFS non-root permissions? See Adding non-root user access to NFS file storage.


## Setting up backup and restore solutions for NFS file shares
{: #backup_restore}

File shares are provisioned into the same location as your cluster. The storage is hosted on clustered servers by {{site.data.keyword.IBM_notm}} to provide availability in case a server goes down. However, file shares are not backed up automatically and might be inaccessible if the entire location fails. To protect your data from being lost or damaged, you can set up periodic backups that you can use to restore your data when needed.
{: shortdesc}

Review the following backup and restore options for your NFS file shares:

**Set up periodic snapshots**

You can set up [periodic snapshots](/docs/infrastructure/FileStorage/snapshots.html) for your NFS file share, which is a read-only image that captures the state of the instance at a point in time. Snapshots are stored on the same file share within the same location. You can restore data from a snapshot if a user accidentally removes important data from the volume.

For more information, see [periodic snapshots](/docs/infrastructure/FileStorage/snapshots.html) for your NFS file share.

**Replicate snapshots to another location**

To protect your data from a location failure, you can [replicate snapshots](/docs/infrastructure/FileStorage/replication.html#working-with-replication) to an NFS file share instance that is set up in another location. Data can be replicated from the primary storage to the backup storage only. You cannot mount a replicated NFS file share instance to a cluster. When your primary storage fails, you can manually set your replicated backup storage to be the primary one. Then, you can mount it to your cluster. After your primary storage is restored, you can restore the data from the backup storage.

For more information, see [replicate snapshots](/docs/infrastructure/FileStorage/replication.html#working-with-replication) to an NFS file share.

**Duplicate storage**

You can duplicate your NFS file share instance in the same location as the original storage instance. A duplicate has the same data as the original storage instance at the point in time when you create the duplicate. Unlike a replica, a duplicate works as an independent storage instance that is separate from the original. To duplicate your storage, you must first set up snapshots for the volume.

For more information, see [creating a duplicate NFS file storage](/docs/infrastructure/FileStorage/how-to-create-duplicate-volume.html#creating-a-duplicate-file-storage).

**Back up data to Object Storage**

You can use the [**ibm-backup-restore image**](/docs/services/RegistryImages/ibm-backup-restore/index.html#ibmbackup_restore_starter) to spin up a backup and restore pod in your cluster. This pod contains a script to run a one-time or periodic backup for any persistent volume claim (PVC) in your cluster. Data is stored in your {{site.data.keyword.objectstoragefull}} instance that you set up in a location.

To make your data even more highly available and protect your app from a location failure, set up a second {{site.data.keyword.objectstoragefull}} instance and replicate data across locations. If you need to restore data from your {{site.data.keyword.objectstoragefull}} instance, use the restore script that is provided with the image.

**Copy data to and from pods and containers**

You can use the `kubectl cp` [command![External link icon](../icons/launch-glyph.svg "External link icon")](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#cp) to copy files and directories to and from pods or specific containers in your cluster.

Before you begin, [target your Kubernetes CLI](cs_cli_install.html#cs_cli_configure) to the cluster that you want to use. If you do not specify a container with -c, the command defaults to the first available container in the pod.

You can use the command in various ways; a concrete example follows this list.

  • Copy data from your local machine to a pod in your cluster: kubectl cp <local_filepath>/<filename> <namespace>/<pod>:<pod_filepath>
  • Copy data from a pod in your cluster to your local machine: kubectl cp <namespace>/<pod>:<pod_filepath>/<filename> <local_filepath>/<filename>
  • Copy data from a pod in your cluster to a specific container in another pod: kubectl cp <namespace>/<pod>:<pod_filepath> <namespace>/<other_pod>:<pod_filepath> -c <container>
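
For example, to copy a local file into a pod in the default namespace and then copy it back out again (the pod and file names are placeholders):

    kubectl cp ./mydata.json default/mypod:/tmp/mydata.json
    kubectl cp default/mypod:/tmp/mydata.json ./mydata.json

{: pre}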