do: add provisioning stacks specific for provider
mtulio committed Jun 10, 2023
1 parent 7602ba5 commit 76b962c
Showing 21 changed files with 666 additions and 21 deletions.
213 changes: 213 additions & 0 deletions docs/guides/DigitalOcean/index.md
@@ -0,0 +1,213 @@
# Guides - DigitalOcean deployment with agnostic installation

Steps to install OKD clusters on DigitalOcean using the platform-agnostic installation.

!!! warning "Development mode"
    This page is under development.

## Prerequisites

- Set up the Ansible working directory

```bash
mkdir okd-installer; cd okd-installer
cat <<EOF > ansible.cfg
[defaults]
inventory = ./inventories
collections_path=./collections
callbacks_enabled=ansible.posix.profile_roles,ansible.posix.profile_tasks
hash_behaviour=merge
[inventory]
enable_plugins = yaml, ini
[callback_profile_tasks]
task_output_limit=1000
sort_order=none
EOF
```

- Install the okd-installer collection (development mode)

```bash
git clone -b added-provider-digitalocean --recursive \
[email protected]:mtulio/ansible-collection-okd-installer.git \
collections/ansible_collections/mtulio/okd_installer
```

- Install the dependencies

```bash
pip install -r collections/ansible_collections/mtulio/okd_installer/requirements.txt
ansible-galaxy collection install -r collections/ansible_collections/mtulio/okd_installer/requirements.yml
```

- Create and export the DigitalOcean token

> A read and write [Personal Access Token](https://docs.digitalocean.com/reference/api/) for the API. Make sure you write down the token in a safe place; you’ll need it later in this tutorial.

```bash
export DO_API_TOKEN=<your_api_token>
```
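
Before moving on, you can run a few optional sanity checks on the prerequisites above. These commands are a sketch, not part of the collection, and assume `curl` and `jq` are available:

```bash
# Confirm Ansible picks up the local ansible.cfg (the inventory,
# collections path, and callback settings above should be listed)
ansible-config dump --only-changed

# Confirm the okd_installer collection is visible under ./collections
ansible-galaxy collection list | grep -i okd_installer

# Confirm the DigitalOcean token is valid ("active" is expected;
# an invalid token returns an authentication error)
curl -s -H "Authorization: Bearer ${DO_API_TOKEN}" \
  "https://api.digitalocean.com/v2/account" | jq -r '.account.status'
```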

## Set up the configuration

```bash
CLUSTER_NAME=do-lab02
VARS_FILE=./vars-do-ha_${CLUSTER_NAME}.yaml

cat <<EOF > ${VARS_FILE}
provider: do
cluster_name: ${CLUSTER_NAME}
config_cluster_region: nyc3
cluster_profile: ha
destroy_bootstrap: no
config_base_domain: splat-do.devcluster.openshift.com
config_ssh_key: "$(cat ~/.ssh/openshift-dev.pub)"
config_pull_secret_file: "${HOME}/.openshift/pull-secret-latest.json"
config_cluster_version: 4.13.0
version: 4.13.0
# Define the OS Image mirror
os_mirror: true
os_mirror_from: stream_artifacts
os_mirror_stream:
  architecture: x86_64
  artifact: openstack
  format: qcow2.gz
os_mirror_to_provider: do
os_mirror_to_do:
  bucket: rhcos-images
  image_type: QCOW2
config_patches:
  - rm-capi-machines
EOF
```

- Install the clients

```bash
ansible-playbook mtulio.okd_installer.install_clients -e @$VARS_FILE
```

- Create the `install-config.yaml`

```bash
ansible-playbook mtulio.okd_installer.config -e mode=create-config -e @$VARS_FILE
```

> The `install-config.yaml` will be generated at `~/.ansible/okd-installer/clusters/$CLUSTER_NAME/install-config.yaml`; modify it as needed before creating the manifests.

- Create the manifests

```bash
ansible-playbook mtulio.okd_installer.config -e mode=create-manifests -e @$VARS_FILE
```
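
If you want to inspect what was rendered before proceeding, the cluster assets are kept in the directory noted above. This is only an optional check, and the exact layout depends on the collection:

```bash
# List the generated assets (install-config backup, manifests, state, etc.)
CLUSTER_DIR=${HOME}/.ansible/okd-installer/clusters/${CLUSTER_NAME}
ls ${CLUSTER_DIR}/
```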

## Install the cluster

The cluster is installed stack by stack, in the order below.

### Network stack

```bash
ansible-playbook mtulio.okd_installer.stack_network -e @$VARS_FILE
```
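
If you have the [doctl](https://docs.digitalocean.com/reference/doctl/) CLI authenticated with the same token, you can verify the created network. This is optional and assumes the VPC name contains the cluster name (the profile vars in this commit use the `<infra_id>-vpc` pattern):

```bash
# List VPCs and look for the one created for this cluster
doctl vpcs list | grep -i "${CLUSTER_NAME}"
```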

### IAM Stack

Not applicable: there is no IAM stack for the DigitalOcean provider.


### DNS Stack

```bash
ansible-playbook mtulio.okd_installer.stack_dns -e @$VARS_FILE
```

### Load Balancer Stack

```bash
ansible-playbook mtulio.okd_installer.stack_loadbalancer -e @$VARS_FILE
```
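
The load balancer stack also registers the cluster DNS records (`api`, `api-int`, and `*.apps`) pointing to the LB, as defined in the profile vars included in this commit. Once the records propagate, you can do a quick resolution check; this assumes the cluster domain is `<cluster_name>.<base_domain>` and `BASE_DOMAIN` is the `config_base_domain` value from the vars file:

```bash
BASE_DOMAIN=splat-do.devcluster.openshift.com
dig +short api.${CLUSTER_NAME}.${BASE_DOMAIN}
dig +short console-openshift-console.apps.${CLUSTER_NAME}.${BASE_DOMAIN}
```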

### Config Commit

This stage lets you modify the cluster configuration (manifests) and then
generate the ignition files used to create the cluster.

#### Manifest patches (pre-ign)

```bash
ansible-playbook mtulio.okd_installer.config -e mode=patch-manifests -e @$VARS_FILE
```

#### Config generation (ignitions)

```bash
ansible-playbook mtulio.okd_installer.config -e mode=create-ignitions -e @$VARS_FILE
```
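
After this step the ignition files (`bootstrap.ign`, `master.ign`, `worker.ign`) should be present in the cluster assets directory; the path below is assumed from the earlier `install-config.yaml` note:

```bash
# Verify the ignition files were generated
ls ${HOME}/.ansible/okd-installer/clusters/${CLUSTER_NAME}/*.ign
```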

### Mirror OS boot image

> TODO for DigitalOcean

```bash
ansible-playbook mtulio.okd_installer.os_mirror -e @$VARS_FILE
```

### Compute Stack

#### Bootstrap node

- Upload the bootstrap ignition to the object storage bucket and create the bootstrap instance

```bash
ansible-playbook mtulio.okd_installer.create_node -e node_role=bootstrap -e @$VARS_FILE
```
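
- Optionally, follow the bootstrap progress with `openshift-install` (installed by the `install_clients` playbook), pointing it at the cluster assets directory assumed from the earlier `install-config.yaml` note

```bash
CLUSTER_DIR=${HOME}/.ansible/okd-installer/clusters/${CLUSTER_NAME}
openshift-install wait-for bootstrap-complete --dir ${CLUSTER_DIR} --log-level=info
```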

#### Control Plane nodes

- Create the Control Plane nodes

```bash
ansible-playbook mtulio.okd_installer.create_node -e node_role=controlplane -e @$VARS_FILE
```

#### Compute/worker nodes

- Create the Compute nodes

```bash
ansible-playbook mtulio.okd_installer.create_node -e node_role=compute -e @$VARS_FILE
```

- Approve the worker nodes' certificate signing requests (CSRs)

```bash
oc adm certificate approve $(oc get csr -o json |jq -r '.items[] | select(.status.certificate == null).metadata.name')

# OR

oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
```
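
- CSRs are issued in two rounds (node client certificates, then serving certificates), so the approval may need to be repeated. A small convenience loop, not part of the collection, that keeps approving pending CSRs for about ten minutes:

```bash
for i in $(seq 1 20); do
  # Approve any CSRs that have no status yet (pending);
  # xargs -r (GNU) skips the command when there is nothing to approve
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
    | xargs -r oc adm certificate approve
  sleep 30
done
```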

## Review the installation

```bash
export KUBECONFIG=${HOME}/.ansible/okd-installer/clusters/${CLUSTER_NAME}/auth/kubeconfig

oc get nodes
oc get co
```
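
Optionally, wait for all cluster operators to settle before using the cluster:

```bash
# Wait up to 30 minutes for every cluster operator to report Available
oc wait clusteroperators --all --for=condition=Available --timeout=30m
```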

## Destroy cluster

```bash
ansible-playbook mtulio.okd_installer.destroy_cluster -e @$VARS_FILE
```
2 changes: 1 addition & 1 deletion hack/ci/deploy.yml
@@ -15,7 +15,7 @@
git submodule update --recursive --remote
chdir={{ collection_root }}
register: git_update
- failed_when: git_update.stdout != ''
+ failed_when: git_update.stderr != ''
tags: build

- name: Ensure the ~/.ansible directory exists.
6 changes: 1 addition & 5 deletions mkdocs.yaml
@@ -110,11 +110,7 @@ nav:
- Installing HA Topology with UPI and Platform Agnostic: guides/AWS/aws-agnostic.md
- Installing SNO with Ephemeral storage: guides/AWS/aws-sno.md
- Installing HA Topology UPI BYO Network: guides/AWS/aws-upi-byo-network.md
- # - Digital Ocean: TODO.md
- # - Oracle Cloud:
- # - Installing HA Topology with UPI and Platform Agnostic: TODO.md
- # - Installing HA Topology with UPI and Platform External: TODO.md
- # - Installing HA Topology with UPI and Platform External and CSI Driver: TODO.md
+ - Digital Ocean: guides/DigitalOcean/index.md
#- Examples: TODO.md
- Development:
- development/index.md
17 changes: 17 additions & 0 deletions playbooks/vars/digitalocean/profiles/HighlyAvailable/dns.yaml
@@ -0,0 +1,17 @@
---

#AWS: https://docs.ansible.com/ansible/latest/collections/community/aws/route53_module.html
cloud_dns_zones:

  # private
  - name: "{{ cluster_state.dns.cluster_domain }}"
    type: cluster
    provider: do
    #vpc_name: "{{ cluster_state.infra_id }}-vpc"
    vpc_region: "{{ cluster_state.region }}"
    #private_zone: yes
    project: "{{ cluster_state.infra_id }}"
    # records:
    #   - name: "api.{{ cluster_state.zones.cluster }}"
    #     value: "lb.{{ cluster_state.zones.cluster }}"
    #     type: CNAME
@@ -0,0 +1 @@
# placeholder
@@ -0,0 +1,69 @@
---

cloud_load_balancer_provider: do

#> DigitalOcean LBs support a single health check per LB (not per forwarding
#> rule), so it can be a problem when a specific service goes down. The
#> recommended approach is to create one LB per rule with a proper health
#> check (not covered here).
cloud_loadbalancers:
  - name: "{{ cluster_state.infra_id }}-ext"
    openshift_id: public
    provider: do
    vpc_name: "{{ cluster_state.infra_id }}-vpc"
    project: "{{ cluster_state.infra_id }}"
    region: "{{ cluster_state.region }}"

    redirect_http_to_https: no
    size: "lb-small"
    #algorithm: round_robin
    enable_backend_keepalive: no
    enable_proxy_protocol: no

    forwarding_rules:
      - entry_protocol: tcp
        entry_port: 6443
        target_protocol: tcp
        target_port: 6443
        tls_passthrough: false
      - entry_protocol: tcp
        entry_port: 22623
        target_protocol: tcp
        target_port: 22623
        tls_passthrough: false
      - entry_protocol: tcp
        entry_port: 80
        target_protocol: tcp
        target_port: 80
        tls_passthrough: false
      - entry_protocol: tcp
        entry_port: 443
        target_protocol: tcp
        target_port: 443
        tls_passthrough: false
    health_check:
      check_interval_seconds: 10
      healthy_threshold: 5
      path: "/healthz"
      port: 6443
      protocol: "https"
      response_timeout_seconds: 5
      unhealthy_threshold: 3

    register_resources:
      - service: dns
        domain: "{{ cluster_state.dns.cluster_domain }}"
        records:
          - name: "lb"
            type: A
          - name: "api"
            value: "lb"
            type: CNAME
          - name: "api-int"
            value: "lb"
            type: CNAME
          - name: "*.apps"
            value: "lb"
            type: CNAME
- name: "oauth-openshift.app"
value: "lb"
type: CNAME
62 changes: 62 additions & 0 deletions playbooks/vars/digitalocean/profiles/HighlyAvailable/network.yaml
@@ -0,0 +1,62 @@
################################
# AWS Networks
# AWS us-east-1: 10.0.0.0/16 (to 10.0.255.255/16)
# AWS <unassigned>: 10.23.0.0/16 (to 10.23.255.255/19)

#########################

vpc_cidr: 10.0.0.0/16
vpc_tags:
- "kubernetes.io/cluster/{{ cluster_state.infra_id }}"

# TODO: make these rules more restrictive; these are currently meant for dev environments.
security_groups:
  - name: "{{ cluster_state.infra_id }}-default-fw"
    description: Default Firewall
    #tags: "{{ vpc_tags }}"
    inbound_rules:
      - protocol: tcp
        ports: "22"
        sources:
          addresses: ["0.0.0.0/0", "::/0"]
        #tags: "{{ vpc_tags }}"
      - protocol: "tcp"
        ports: "1-65535"
        sources:
          addresses: ["{{ vpc_cidr }}"]
        # tags: "{{ vpc_tags }}"
      - protocol: "udp"
        ports: "1-65535"
        sources:
          addresses: ["{{ vpc_cidr }}"]
        # tags: "{{ vpc_tags }}"
      - protocol: "icmp"
        ports: "1-65535"
        sources:
          addresses: ["{{ vpc_cidr }}"]
        # tags: "{{ vpc_tags }}"
    outbound_rules:
      - protocol: "tcp"
        ports: "1-65535"
        destinations:
          addresses: ["0.0.0.0/0", "::/0"]
      - protocol: "udp"
        ports: "1-65535"
        destinations:
          addresses: ["0.0.0.0/0", "::/0"]
      - protocol: "icmp"
        ports: "1-65535"
        destinations:
          addresses: ["0.0.0.0/0", "::/0"]

cloud_networks:
  - name: "{{ cluster_state.infra_id }}-vpc"
    block: "{{ vpc_cidr }}"
    provider: do
    region: "{{ cluster_state.region }}"
    project: "{{ cluster_state.infra_id }}"
    project_purpose: "OpenShift Cluster"
    project_env: "Development"

    security_groups: "{{ security_groups | d([]) }}"
    tags: "{{ cluster_state.tags | d({}) }}"