## Install an OCP cluster with ARM64 architecture on Oracle Cloud Infrastructure (OCI) with CCM

Install an OCP cluster on OCI using the Platform External option and the OCI Cloud Controller Manager (CCM).

## Prerequisites

- okd-installer Collection with [OCI dependencies installed](./oci-prerequisites.md)
- Compartment used to launch the cluster created and its OCID exported to the variable `${OCI_COMPARTMENT_ID}`
- Compartment holding the DNS zone (base domain) exported to the variable `${OCI_COMPARTMENT_ID_DNS}`
- Compartment used to store the RHCOS image exported to the variable `${OCI_COMPARTMENT_ID_IMAGE}`

Example:

```bash
cat <<EOF > ~/.oci/env
# Compartment in which the cluster will be installed
OCI_COMPARTMENT_ID="<CHANGE_ME:ocid1.compartment.oc1.UUID>"

# Compartment in which the DNS zone (base domain) is created
OCI_COMPARTMENT_ID_DNS="<CHANGE_ME:ocid1.compartment.oc1.UUID>"

# Compartment in which the OS image will be created
OCI_COMPARTMENT_ID_IMAGE="<CHANGE_ME:ocid1.compartment.oc1.UUID>"
EOF
source ~/.oci/env
```
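
Optionally, verify that each compartment OCID resolves and is active before continuing. This is a read-only sketch that assumes the `oci` CLI is installed and configured (see the prerequisites page):

```bash
# Confirm each exported compartment OCID points to an ACTIVE compartment.
for c in "${OCI_COMPARTMENT_ID}" "${OCI_COMPARTMENT_ID_DNS}" "${OCI_COMPARTMENT_ID_IMAGE}"; do
  oci iam compartment get --compartment-id "$c" \
    --query 'data.{name: name, state: "lifecycle-state"}' --output table
done
```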

## Setup with Platform External type and CCM

Create the vars file for the okd-installer collection:

```bash
# MCO patch without revendor (w/o disabling FG)
CLUSTER_NAME=oci-e414rc2arm1usash1
VARS_FILE=./vars-oci-ha_${CLUSTER_NAME}.yaml

cat <<EOF > ${VARS_FILE}
provider: oci
cluster_name: ${CLUSTER_NAME}
config_cluster_region: us-ashburn-1

cluster_profile: ha
destroy_bootstrap: no

#config_base_domain: splat-oci.devcluster.openshift.com
config_base_domain: us-ashburn-1.splat-oci.devcluster.openshift.com

config_ssh_key: "$(cat ~/.ssh/openshift-dev.pub)"
config_pull_secret_file: "${HOME}/.openshift/pull-secret-latest.json"

config_cluster_version: 4.14.0-rc.2
version: 4.14.0-rc.2

config_platform: external
config_platform_spec: '{"platformName":"oci"}'

oci_ccm_namespace: oci-cloud-controller-manager
oci_compartment_id: ${OCI_COMPARTMENT_ID}
oci_compartment_id_dns: ${OCI_COMPARTMENT_ID_DNS}
oci_compartment_id_image: ${OCI_COMPARTMENT_ID_IMAGE}

# Available manifest patches (applied after the 'create manifest' stage)
config_patches:
- rm-capi-machines
- mc_varlibetcd
- mc-kubelet-providerid
- deploy-oci-ccm
#- deploy-oci-csi

# MachineConfig to set the kubelet environment; this script discovers the ProviderID
cfg_patch_kubelet_providerid_script: |
  PROVIDERID=\$(curl -H "Authorization: Bearer Oracle" -sL http://169.254.169.254/opc/v2/instance/ | jq -r .id);

# Spread nodes across the availability domains ("AZs")
oci_availability_domains:
- gzqB:US-ASHBURN-AD-1
- gzqB:US-ASHBURN-AD-2
- gzqB:US-ASHBURN-AD-3

oci_fault_domains:
- FAULT-DOMAIN-1
- FAULT-DOMAIN-2
- FAULT-DOMAIN-3

# OCI config for ARM64
config_default_architecture: arm64
compute_shape: "VM.Standard.A1.Flex"
controlplane_shape: "VM.Standard.A1.Flex"
bootstrap_instance: "VM.Standard.A1.Flex"

# Define the OS image mirror
os_mirror: yes
os_mirror_from: stream_artifacts
os_mirror_stream:
  architecture: aarch64
  artifact: openstack
  format: qcow2.gz

os_mirror_to_provider: oci
os_mirror_to_oci:
  compartment_id: ${OCI_COMPARTMENT_ID_IMAGE}
  bucket: rhcos-images
  image_type: QCOW2
  # Not supported yet; must be added for arm64:
  # https://oci-ansible-collection.readthedocs.io/en/latest/collections/oracle/oci/oci_compute_image_shape_compatibility_entry_module.html#ansible-collections-oracle-oci-oci-compute-image-shape-compatibility-entry-module
  compatibility_shapes:
  - name: VM.Standard.A1.Flex
    memory_constraints:
      min_in_gbs: 4
      max_in_gbs: 128
    ocpu_constraints:
      min: 2
      max: 32
EOF
```
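
Before running the installer, it can help to confirm the rendered vars file is valid YAML and that the compartment variables were expanded. A small sketch using PyYAML, which is typically already present where Ansible is installed:

```bash
# Parse the vars file and print a few keys to confirm variable expansion.
python3 -c '
import sys, yaml
cfg = yaml.safe_load(open(sys.argv[1]))
for key in ("cluster_name", "config_platform", "oci_compartment_id"):
    print(key, "=", cfg.get(key))
' "${VARS_FILE}"
```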

## Install the cluster

```bash
ansible-playbook mtulio.okd_installer.create_all \
  -e cert_max_retries=30 \
  -e cert_wait_interval_sec=60 \
  -e @$VARS_FILE
```

### Approve certificates

Export `KUBECONFIG`:

```bash
export KUBECONFIG=$HOME/.ansible/okd-installer/clusters/${CLUSTER_NAME}/auth/kubeconfig
```

Check and approve the pending certificate signing requests (CSRs):

```bash
oc get csr \
  -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
  | xargs oc adm certificate approve
```
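
Approval usually has to be repeated: each node first sends a client (node-bootstrapper) CSR and, after joining, a serving CSR. A minimal sketch that re-runs the same check for a while (adjust the iterations and interval as needed):

```bash
# Re-check for pending CSRs every 30s for ~10 minutes and approve them,
# until all expected nodes appear in 'oc get nodes'.
for i in $(seq 1 20); do
  csrs=$(oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{" "}}{{end}}{{end}}')
  if [ -n "$csrs" ]; then
    oc adm certificate approve $csrs
  fi
  sleep 30
done
```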

Check that the nodes have joined the cluster:

```bash
oc get nodes
```
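
With the nodes `Ready`, it is also worth confirming that the cluster rollout completed. These are standard OpenShift checks, not specific to the okd-installer collection:

```bash
# Wait for the ClusterVersion to report Available, then review the operators;
# all of them should be Available=True and not Degraded.
oc wait clusterversion/version --for=condition=Available=True --timeout=30m
oc get clusteroperators
```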

## Testing

Set up the test environment (internal registry, labeling and tainting a dedicated worker node, etc.); the registry step is sketched after the snippet below:

```bash
test_node=$(oc get nodes -l node-role.kubernetes.io/worker='' -o jsonpath='{.items[0].metadata.name}')
oc label node $test_node node-role.kubernetes.io/tests=""
oc adm taint node $test_node node-role.kubernetes.io/tests="":NoSchedule
```
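
The internal image registry mentioned above needs backing storage before it becomes available. A minimal sketch, assuming no persistent storage has been configured yet, is to switch the registry to ephemeral `emptyDir` storage (fine for conformance runs, not for production):

```bash
# Set the integrated image registry to Managed with ephemeral (emptyDir) storage.
oc patch configs.imageregistry.operator.openshift.io cluster \
  --type=merge \
  --patch '{"spec":{"managementState":"Managed","storage":{"emptyDir":{}}}}'
```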

Run the tests:

```bash
./opct run -w &&\
  ./opct retrieve &&\
  ./opct report *.tar.gz --save-to /tmp/results --server-skip
```
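
The report saved to `/tmp/results` can then be inspected locally (the exact contents depend on the opct version); one option is to serve the directory over HTTP:

```bash
# Serve the saved report directory on http://localhost:8000 (Python 3.7+).
python3 -m http.server --directory /tmp/results 8000
```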

## Destroy the cluster

```bash
ansible-playbook mtulio.okd_installer.destroy_cluster -e @$VARS_FILE
```