---
subcollection: solution-tutorials
copyright:
  years: 2024
lastupdated: "2024-01-05"
lasttested:
content-type: tutorial
services: vpc, vmwaresolutions, vpc-file-storage
account-plan: paid
completion-time: 1h
use-case: ApplicationModernization, Vmware
---
{{site.data.keyword.attribute-definition-list}}

# Provision NFS storage and attach to cluster
{: #vpc-bm-vmware-nfs}
{: toc-content-type="tutorial"}
{: toc-services="vpc, vmwaresolutions, vpc-file-storage"}
{: toc-completion-time="1h"}

This tutorial may incur costs. Use the Cost Estimator to generate a cost estimate based on your projected usage. {: tip}

File Storage in {{site.data.keyword.vpc_short}} is available for customers with special approval to preview this service in the selected regions. Contact your IBM Sales representative if you are interested in getting access. {: beta}

This tutorial is part of a series, and requires that you have completed the related tutorials in the presented order. {: important}

In this tutorial, an NFS file share is created in {{site.data.keyword.vpc_short}} and attached to a VMware cluster as a datastore. This phase is optional if you use vSAN as your preferred storage option. {: shortdesc}

## Objectives

{: #vpc-bm-vmware-nfs-objectives}

In this tutorial, you create a {{site.data.keyword.vpc_short}} file share and attach it to the VMware cluster as a datastore over NFS.

NFS as a Datastore{: caption="Figure 1. NFS as a Datastore" caption-side="bottom"}

  1. Create file share in {{site.data.keyword.vpc_short}}
  2. Attach file share as a Datastore for a Compute Cluster in vCenter

## Before you begin

{: #vpc-bm-vmware-nfs-prereqs}

This tutorial requires:

* Common prereqs for VMware Deployment tutorials in {{site.data.keyword.vpc_short}}

This tutorial is part of a series and requires that you have completed the related tutorials. Make sure that you have successfully completed the required previous steps:

Log in with the IBM Cloud CLI using your username and password, or use an API key. Select your target region and your preferred resource group.
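
For example, a minimal sketch of the login and targeting commands; the region and resource group name below are placeholder example values, not requirements of this tutorial:

```sh
# Log in interactively, or pass an API key non-interactively
ibmcloud login
# ibmcloud login --apikey <your-api-key>

# Target the region and resource group used throughout this series (example values)
ibmcloud target -r eu-de -g vmware-tutorial
```
{: codeblock}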

When advised to use a web browser, use the jump machine provisioned in the {{site.data.keyword.vpc_short}} provisioning tutorial. This jump machine has network access to the hosts, the private DNS service, and the vCenter IP to be provisioned. Use a URL with the FQDN, for example https://vcenter.vmware.ibmcloud.local as used in this example. {: note}

The variables used, for example $VMWARE_VPC, are defined in the previous steps of this tutorial series. {: note}
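
Before you continue, you can quickly check that the required variable from the earlier tutorials is still set in your shell session, for example:

```sh
# Print the VPC ID captured in the previous tutorials; re-export it if the output is empty
echo "VPC ID: ${VMWARE_VPC}"
```
{: codeblock}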

## Create file share in {{site.data.keyword.vpc_short}}

{: #vpc-bm-vmware-nfs-createfileshare} {: step}

To create a file share in {{site.data.keyword.vpc_short}}, you can use the CLI, the UI, or the API.

  1. The following command provides the reference when using the CLI:

    ibmcloud is share-create --help

    {: codeblock}

  2. To check the available share profiles, use the following command:

    ibmcloud is share-profiles

    {: codeblock}

    Example:

    ibmcloud is share-profiles
    Listing file share profiles in region eu-de under account IBM Cloud Acc as user [email protected]...
    Name          Family   
    custom-iops   custom   
    tier-3iops    tiered   
    tier-5iops    tiered
    tier-10iops   tiered 

    {: screen}

  3. Create a file share.

    In this example, a 1 TB file share with the 10 IOPS/GB profile is created, using the previously created {{site.data.keyword.vpc_short}} as a target. Record the IDs of the file share and the file share target.

    VMWARE_DATASTORE01=$(ibmcloud is share-create --name vmware-nfs-datastore-01 --zone eu-de-1 --profile tier-10iops --size 1000 --targets '[{"name": "vmware-cluster-01", "vpc": {"id": "'$VMWARE_VPC'"}}]' --output json | jq -r .id)

    {: codeblock}

    VMWARE_DATASTORE01_TARGET01=$(ibmcloud is share $VMWARE_DATASTORE01 --output json | jq -r .targets[0].id)

    {: codeblock}

  4. To mount the file share on a server, you need to get the defined target's NFS mount path.

    VMWARE_DATASTORE01_TARGET01_MOUNTPATH=$(ibmcloud is share-target $VMWARE_DATASTORE01 $VMWARE_DATASTORE01_TARGET01 --output json | jq -r .mount_path)

    {: codeblock}

    echo "Mount path is : "$VMWARE_DATASTORE01_TARGET01_MOUNTPATH

    {: codeblock}

    vCenter needs the mount path split into separate server and folder values. You can use the following commands to get the required values:

    echo "Server : "$(echo $VMWARE_DATASTORE01_TARGET01_MOUNTPATH | awk -F: '{print $1}')

    {: codeblock}

    echo "Folder : "$(echo $VMWARE_DATASTORE01_TARGET01_MOUNTPATH | awk -F: '{print $2}')

    {: codeblock}

  5. Use the Server and Folder values when configuring the datastore in vCenter. You can verify the created file share and its target with the commands shown after this list.
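
As a quick verification sketch, you can re-run the CLI commands introduced above against the IDs recorded in $VMWARE_DATASTORE01 and $VMWARE_DATASTORE01_TARGET01:

```sh
# Show the file share and confirm that it is available
ibmcloud is share $VMWARE_DATASTORE01

# Show the mount target details, including the NFS mount path
ibmcloud is share-target $VMWARE_DATASTORE01 $VMWARE_DATASTORE01_TARGET01
```
{: codeblock}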

## Attach {{site.data.keyword.vpc_short}} File share as a Datastore for a Compute Cluster in vCenter

{: #vpc-bm-vmware-nfs-attachfileshare} {: step}

In the vSphere Client object navigator, browse to a host, a cluster, or a data center.

  1. From the right-click menu, select Storage > New Datastore.
  2. Select NFS as the datastore type and specify an NFS version as NFS 4.1.
  3. Enter the datastore parameters: Datastore name, Folder, and Server. With NFS 4.1, you can add multiple IP addresses or server names if the NFS server supports trunking; IBM Cloud uses multiple IPs behind the provided FQDN. The ESXi host uses these values to achieve multipathing to the NFS server mount point.
  4. On the Configure Kerberos authentication step, select Don't use Kerberos authentication.
  5. On the Host Accessibility step, select all hosts in your cluster.
  6. Review the configuration options and click Finish.

The following parameters were used in this example. If you prefer the command line, an alternative esxcli sketch is shown after the note below.

General 
Name:  Datastore-VPC-NFS-01
Type:  NFS 4.1

NFS settings
Server:  fsf-fra0251a-fz.adn.networklayer.com
Folder:  /nxg_s_voll_mz02b7_7e070ef6_12f5_4794_9077_953ba53dde82
Access Mode:  Read-write
Kerberos:  Disabled

Hosts that will have access to this datastore
Hosts:  esx-001.vmware.ibmcloud.local, esx-002.vmware.ibmcloud.local, esx-003.vmware.ibmcloud.local 

{: screen}

With this setup, your hosts access the NFS share through the ESXi hosts' management interfaces (PCI NICs). This keeps this non-production setup simple.
{: note}
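
Alternatively, you can mount the same share from each ESXi host's command line. The following is a minimal sketch, assuming SSH access to the hosts; the server and folder values come from the example above, and the datastore name is illustrative:

```sh
# On each ESXi host: mount the VPC file share as an NFS 4.1 datastore
esxcli storage nfs41 add \
  --hosts fsf-fra0251a-fz.adn.networklayer.com \
  --share /nxg_s_voll_mz02b7_7e070ef6_12f5_4794_9077_953ba53dde82 \
  --volume-name Datastore-VPC-NFS-01

# Verify that the datastore is mounted and accessible
esxcli storage nfs41 list
```
{: codeblock}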

## Next steps

{: #vpc-bm-vmware-nfs-next-steps}

The next step in the tutorial series is: