oci/ccm: adapting to inject CCM manifests into install flow
mtulio committed Mar 16, 2023
1 parent 1be432e commit b39c6d7
Showing 53 changed files with 1,426 additions and 585 deletions.
190 changes: 97 additions & 93 deletions docs/guides/installing-agnostic-oci.md
@@ -160,8 +160,9 @@ OCP_RELEASE_413="quay.io/openshift-release-dev/ocp-release:4.13.0-ec.4-x86_64"
EOF
source ~/.openshift/env

CLUSTER_NAME=oci-cr3cmo
cat <<EOF > ./vars-oci-ha_${CLUSTER_NAME}.yaml
CLUSTER_NAME=oci-t9
VAR_FILE=./vars-oci-ha_${CLUSTER_NAME}.yaml
cat <<EOF > ${VAR_FILE}
provider: oci
cluster_name: ${CLUSTER_NAME}
config_cluster_region: us-sanjose-1
@@ -176,7 +177,7 @@ cluster_profile: ha
destroy_bootstrap: no
config_base_domain: splat-oci.devcluster.openshift.com
config_ssh_key: "$(cat ~/.ssh/id_rsa.pub)"
config_ssh_key: "$(cat ~/.ssh/id_rsa.pub;cat ~/.ssh/openshift-dev.pub)"
config_pull_secret_file: "${HOME}/.openshift/pull-secret-latest.json"
#config_cluster_version: 4.13.0-ec.3-x86_64
@@ -221,7 +222,7 @@ os_mirror_to_oci:
config_patches:
- rm-capi-machines
#- platform-external-kubelet # PROBLEM hanging kubelet (network)
- mc-kubelet-env-workaround # PROBLEM hanging kubelet (network)
#- platform-external-kcmo
- deploy-oci-ccm
- yaml_patch # working for OCI, but the manifest path must be known
@@ -232,10 +233,6 @@ cfg_patch_yaml_patch_specs:
- manifest: /manifests/cluster-infrastructure-02-config.yml
patch: '{"spec":{"platformSpec":{"type":"External","external":{"platformName":"oci"}}},"status":{"platform":"External","platformStatus":{"type":"External","external":{}}}}'
## OCI : Change the namespace from downloaded assets
#- manifest: /manifests/oci-cloud-controller-manager-02.yaml
# patch: '{"metadata":{"namespace":"oci-cloud-controller-manager"}}'
cfg_patch_line_regex_patch_specs:
- manifest: /manifests/oci-cloud-controller-manager-01-rbac.yaml
#search_string: 'namespace: kube-system'
@@ -246,15 +243,18 @@ cfg_patch_line_regex_patch_specs:
- manifest: /manifests/oci-cloud-controller-manager-02.yaml
regexp: '^(.*)(namespace\\: kube-system)$'
line: '\\1namespace: oci-cloud-controller-manager'
EOF
cfg_patch_kubelet_env_workaround_content: |
KUBELET_PROVIDERID=\$(curl -H "Authorization: Bearer Oracle" -sL http://169.254.169.254/opc/v2/instance/ | jq -r .id); echo "KUBELET_PROVIDERID=\$KUBELET_PROVIDERID" | sudo tee -a /etc/kubernetes/kubelet-workaround
EOF

```
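The `cfg_patch_line_regex_patch_specs` entries above behave like a regexp/backreference line rewrite (in the style of Ansible's `lineinfile` with `backrefs`). A minimal sketch of the same namespace rewrite using `sed`, run against a hypothetical manifest fixture (the file name and content are illustrative, not the real downloaded CCM asset):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Tiny fixture resembling a CCM RBAC manifest (illustrative content only)
cat > /tmp/oci-ccm-rbac-demo.yaml <<'EOF'
kind: ServiceAccount
metadata:
  name: cloud-controller-manager
  namespace: kube-system
EOF

# Same intent as the regexp/line spec: rewrite the namespace, keep the indentation
sed -E -i 's/^([[:space:]]*)namespace: kube-system$/\1namespace: oci-cloud-controller-manager/' \
  /tmp/oci-ccm-rbac-demo.yaml

grep 'namespace:' /tmp/oci-ccm-rbac-demo.yaml
```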

### Install the clients

```bash
ansible-playbook mtulio.okd_installer.install_clients -e @./vars-oci-ha.yaml
ansible-playbook mtulio.okd_installer.install_clients -e @$VAR_FILE
```

### Create the Installer Configuration
@@ -263,161 +263,165 @@ Create the installation configuration:


```bash
ansible-playbook mtulio.okd_installer.config \
-e mode=create \
-e @./vars-oci-ha.yaml
ansible-playbook mtulio.okd_installer.config -e mode=create-config -e @$VAR_FILE
```

### Mirror the image
The rendered install-config.yaml will be available at the following path:

- Mirror image
- `~/.ansible/okd-installer/clusters/$CLUSTER_NAME/install-config.yaml`

> Example: `$ jq -r '.architectures["x86_64"].artifacts.openstack.formats["qcow2.gz"].disk.location' ~/.ansible/okd-installer/clusters/ocp-oci/coreos-stream.json`
If you want to skip this part, place your own install-config.yaml on the same
path and go to the next step.
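If you bring your own install-config.yaml, it needs at least the fields `openshift-install` expects. A minimal sketch with placeholder values (the cluster name, domain, pull secret, and SSH key below are hypothetical):

```shell
#!/usr/bin/env bash
set -euo pipefail

DEMO_DIR=$(mktemp -d)

# Placeholder install-config.yaml; real values come from your vars file
cat > "$DEMO_DIR/install-config.yaml" <<'EOF'
apiVersion: v1
baseDomain: example.devcluster.openshift.com
metadata:
  name: oci-demo
platform:
  none: {}
pullSecret: '{"auths":{}}'
sshKey: 'ssh-rsa AAAA... user@host'
EOF

# Sanity-check the top-level keys are present before running the next step
for key in baseDomain metadata platform pullSecret sshKey; do
  grep -q "^${key}" "$DEMO_DIR/install-config.yaml" && echo "ok: ${key}"
done
```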

```bash
ansible-playbook mtulio.okd_installer.os_mirror -e @./vars-oci-ha.yaml
```
### Create the Installer manifests

### Create the Network Stack
Create the installation configuration:

```bash
ansible-playbook mtulio.okd_installer.stack_network \
-e @./vars-oci-ha.yaml
ansible-playbook mtulio.okd_installer.config -e mode=create-manifests -e @$VAR_FILE
```

The manifests will be rendered and saved in the install directory:

- `~/.ansible/okd-installer/clusters/$CLUSTER_NAME/`

If you want to skip this part and use your own manifests, you must be able to run
`openshift-install create manifests` in the install dir, and the file
`manifests/cluster-config.yaml` must be created correctly.

The infrastructure manifest also must exist on path: `manifests/cluster-infrastructure-02-config.yml`.


**After this stage, the file `$install_dir/cluster_state.json` will be created and populated with the stack results.**
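The stack results file is plain JSON, so it can be inspected with `jq`. A sketch against a mock `cluster_state.json` — the real schema is produced by the playbooks, so the field names below are illustrative only:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Mock of the stack results file (field names are assumptions for illustration)
cat > /tmp/cluster_state_demo.json <<'EOF'
{
  "cluster_name": "oci-demo",
  "stacks": {
    "network": {"status": "created"},
    "dns": {"status": "created"}
  }
}
EOF

# List which stacks have been recorded so far
jq -r '.stacks | keys[]' /tmp/cluster_state_demo.json
```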

### IAM Stack

N/A

> TODO: create Compartment validations
### Create the Network Stack

```bash
ansible-playbook mtulio.okd_installer.stack_network -e @$VAR_FILE
```

### DNS Stack

```bash
ansible-playbook mtulio.okd_installer.stack_dns \
-e @./vars-oci-ha.yaml
ansible-playbook mtulio.okd_installer.stack_dns -e @$VAR_FILE
```

### Load Balancer Stack

```bash
ansible-playbook mtulio.okd_installer.stack_loadbalancer \
-e @./vars-oci-ha.yaml
ansible-playbook mtulio.okd_installer.stack_loadbalancer -e @$VAR_FILE
```

### Compute Stack
### Config Commit

#### Bootstrap
This stage allows the user to modify the cluster configurations (manifests),
then generate the ignition files used to create the cluster.

- Upload the bootstrap ignition to blob and Create the Bootstrap Instance
#### Manifest patches (pre-ign)

```bash
ansible-playbook mtulio.okd_installer.create_node \
-e node_role=bootstrap \
-e @./vars-oci-ha.yaml
```
> TODO/WIP
- Create the Control Plane nodes
In this step the playbooks apply any patches to the manifests,
according to the `config_patches` list in the vars file.

```bash
ansible-playbook mtulio.okd_installer.create_node \
-e node_role=controlplane \
-e @./vars-oci-ha.yaml
```
The `config_patches` entries are predefined tasks that run to achieve specific goals.

- Create the Compute nodes
If you do not want to apply patches, set the value to empty: `config_patches: []`.

If you prefer to apply patches manually, you can do so by changing the manifests
in the install dir. Default install dir path: `~/.ansible/okd-installer/clusters/${cluster_name}/*`

```bash
ansible-playbook mtulio.okd_installer.create_node \
-e node_role=compute \
-e @./vars-oci-ha.yaml
ansible-playbook mtulio.okd_installer.config -e mode=patch-manifests -e @$VAR_FILE
```
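For reference, a trimmed `config_patches` selection in the vars file might look like this (a config fragment only; the patch names are taken from the example vars file earlier in this guide):

```yaml
# Apply only the patches you need
config_patches:
  - rm-capi-machines
  - deploy-oci-ccm
  - yaml_patch
```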

> TODO: create instance Pool
#### Config generation (ignitions)

> TODO: Approve certificates (bash loop or use existing playbook)
> TODO/WIP
```bash
oc adm certificate approve $(oc get csr -o json |jq -r '.items[] | select(.status.certificate == null).metadata.name')
```
This step should be the last one before the configuration is 'committed':

### Create all
- `create ignitions` when using `openshift-install` as the config provider
- `` when using `assisted installer` as the config provider

```bash
ansible-playbook mtulio.okd_installer.create_all \
-e certs_max_retries=20 \
-e cert_wait_interval_sec=60 \
-e @./vars-oci-ha.yaml
ansible-playbook mtulio.okd_installer.config -e mode=create-ignitions -e @$VAR_FILE
```
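The ignition files are plain JSON, so a quick structural check with `jq` can catch truncated renders. A sketch against a mock ignition file (the config below is a minimal placeholder, not a real render):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Minimal placeholder resembling an ignition config
cat > /tmp/bootstrap-demo.ign <<'EOF'
{"ignition": {"version": "3.2.0"}, "storage": {"files": []}}
EOF

# Valid JSON with an ignition version present?
jq -e '.ignition.version' /tmp/bootstrap-demo.ign
```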

> TODO: measure total time
<!-- #### Ignition patches (Post)
## Review the cluster
> TODO? there's no use case to patch the ignition files, and it's not recommended. Keeping this section hidden in the document for future review. -->

```bash
export KUBECONFIG=${HOME}/.ansible/okd-installer/clusters/${cluster_name}/auth/kubeconfig
### Mirror OS boot image

oc get nodes
oc get co
- Download the image from the URL provided by the `openshift-install` CoreOS stream metadata

> Example: `$ jq -r '.architectures["x86_64"].artifacts.openstack.formats["qcow2.gz"].disk.location' ~/.ansible/okd-installer/clusters/ocp-oci/coreos-stream.json`
```bash
ansible-playbook mtulio.okd_installer.os_mirror -e @$VAR_FILE
```
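The `jq` query from the example above can be exercised against a mock `coreos-stream.json` to see what it returns (the structure matches the example in this guide; the location URL is a placeholder):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Mock of the stream metadata written by openshift-install
cat > /tmp/coreos-stream-demo.json <<'EOF'
{
  "architectures": {
    "x86_64": {
      "artifacts": {
        "openstack": {
          "formats": {
            "qcow2.gz": {
              "disk": {"location": "https://example.com/rhcos-openstack.x86_64.qcow2.gz"}
            }
          }
        }
      }
    }
  }
}
EOF

# Extract the download location for the qcow2.gz disk image
jq -r '.architectures["x86_64"].artifacts.openstack.formats["qcow2.gz"].disk.location' \
  /tmp/coreos-stream-demo.json
```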

## OPCT setup
### Compute Stack

- Create the OPCT [dedicated] node
#### Bootstrap node

> https://redhat-openshift-ecosystem.github.io/provider-certification-tool/user/#option-a-command-line
- Upload the bootstrap ignition to a blob and create the bootstrap instance

```bash
# Create OPCT node
ansible-playbook mtulio.okd_installer.create_node \
-e node_role=opct \
-e @./vars-oci-ha.yaml
ansible-playbook mtulio.okd_installer.create_node -e node_role=bootstrap -e @$VAR_FILE
```

- OPCT dedicated node setup
#### Control Plane nodes

- Create the Control Plane nodes

```bash
ansible-playbook mtulio.okd_installer.create_node -e node_role=controlplane -e @$VAR_FILE
```

# Set the OPCT requirements (registry, labels, wait-for COs stable)
ansible-playbook ../opct/hack/opct-runner/opct-run-tool-preflight.yaml -e cluster_name=oci -D
#### Compute/worker nodes

oc label node opct-01.priv.ocp.oraclevcn.com node-role.kubernetes.io/tests=""
oc adm taint node opct-01.priv.ocp.oraclevcn.com node-role.kubernetes.io/tests="":NoSchedule
- Create the Compute nodes

```bash
ansible-playbook mtulio.okd_installer.create_node -e node_role=compute -e @$VAR_FILE
```

- OPCT regular
> TODO: create instance Pool
```bash
# Run OPCT
~/opct/bin/openshift-provider-cert-linux-amd64-v0.3.0 run -w
- Approve the worker nodes' certificate signing requests (CSRs)

# Get the results and explore it
~/opct/bin/openshift-provider-cert-linux-amd64-v0.3.0 retrieve
~/opct/bin/openshift-provider-cert-linux-amd64-v0.3.0 results *.tar.gz
~/opct/bin/openshift-provider-cert-linux-amd64-v0.3.0 report *.tar.gz
```bash
oc adm certificate approve $(oc get csr -o json |jq -r '.items[] | select(.status.certificate == null).metadata.name')
```
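The `jq` filter above selects only CSRs without an issued certificate. It can be verified against a mock CSR list (the names and contents below are illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Mock 'oc get csr -o json' output: one pending CSR, one already signed
cat > /tmp/csr-demo.json <<'EOF'
{
  "items": [
    {"metadata": {"name": "csr-pending"}, "status": {}},
    {"metadata": {"name": "csr-signed"}, "status": {"certificate": "LS0t..."}}
  ]
}
EOF

# Same selection logic as the 'oc adm certificate approve' pipeline above:
# only the CSR with no issued certificate is selected
jq -r '.items[] | select(.status.certificate == null).metadata.name' /tmp/csr-demo.json
```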

- OPCT upgrade mode
### Create all

```bash
# from a cluster 4.12.1, run upgrade conformance to 4.13
~/opct/bin/openshift-provider-cert-linux-amd64-v0.3.0 run -w \
--mode=upgrade \
--upgrade-to-image=$(oc adm release info 4.13.0-ec.2 -o jsonpath={.image})

# Get the results and explore it
~/opct/bin/openshift-provider-cert-linux-amd64-v0.3.0 retrieve
~/opct/bin/openshift-provider-cert-linux-amd64-v0.3.0 results *.tar.gz
~/opct/bin/openshift-provider-cert-linux-amd64-v0.3.0 report *.tar.gz
ansible-playbook mtulio.okd_installer.create_all \
-e certs_max_retries=20 \
-e cert_wait_interval_sec=60 \
-e @$VAR_FILE
```

## Generate custom image
## Review the cluster

```
```bash
export KUBECONFIG=${HOME}/.ansible/okd-installer/clusters/${cluster_name}/auth/kubeconfig

oc get nodes
oc get co
```

## Destroy

```bash
ansible-playbook mtulio.okd_installer.destroy_cluster -e @./vars-oci-ha.yaml
ansible-playbook mtulio.okd_installer.destroy_cluster -e @$VAR_FILE
```
50 changes: 50 additions & 0 deletions docs/guides/validate-cluster-with-opct.md
@@ -0,0 +1,50 @@
## OPCT setup

- Create the OPCT [dedicated] node

> https://redhat-openshift-ecosystem.github.io/provider-certification-tool/user/#option-a-command-line
```bash
# Create OPCT node
ansible-playbook mtulio.okd_installer.create_node \
-e node_role=opct \
-e @./vars-oci-ha.yaml
```

- OPCT dedicated node setup

```bash

# Set the OPCT requirements (registry, labels, wait-for COs stable)
ansible-playbook ../opct/hack/opct-runner/opct-run-tool-preflight.yaml -e cluster_name=oci -D

oc label node opct-01.priv.ocp.oraclevcn.com node-role.kubernetes.io/tests=""
oc adm taint node opct-01.priv.ocp.oraclevcn.com node-role.kubernetes.io/tests="":NoSchedule

```

- OPCT regular

```bash
# Run OPCT
~/opct/bin/openshift-provider-cert-linux-amd64-v0.3.0 run -w

# Get the results and explore it
~/opct/bin/openshift-provider-cert-linux-amd64-v0.3.0 retrieve
~/opct/bin/openshift-provider-cert-linux-amd64-v0.3.0 results *.tar.gz
~/opct/bin/openshift-provider-cert-linux-amd64-v0.3.0 report *.tar.gz
```

- OPCT upgrade mode

```bash
# from a cluster 4.12.1, run upgrade conformance to 4.13
~/opct/bin/openshift-provider-cert-linux-amd64-v0.3.0 run -w \
--mode=upgrade \
--upgrade-to-image=$(oc adm release info 4.13.0-ec.2 -o jsonpath={.image})

# Get the results and explore it
~/opct/bin/openshift-provider-cert-linux-amd64-v0.3.0 retrieve
~/opct/bin/openshift-provider-cert-linux-amd64-v0.3.0 results *.tar.gz
~/opct/bin/openshift-provider-cert-linux-amd64-v0.3.0 report *.tar.gz
```
1 change: 1 addition & 0 deletions playbooks/config.yaml
@@ -2,6 +2,7 @@
- name: okd-installer | Installer Configuration
hosts: localhost
connection: local
#gather_facts: yes

roles:
- config