diff --git a/README.md b/README.md
index e9c60a3..d79da16 100644
--- a/README.md
+++ b/README.md
@@ -87,14 +87,29 @@ An empty disk on each server, for 'local storage' :
  - /dev/disk/by-id/nvme-QEMU_NVMe_Ctrl_incus_disk3
 
+### What has to be tweaked to deploy on 'real' hardware :
+
 One can edit hosts.yaml.example to fit his needs :
 
-First of all, use uuidgen to generate a unique UUID for your deployment, and remplace it in hosts.yaml :
+### Global Vars :
 
  - ceph_fsid: "e2850e1f-7aab-472e-b6b1-824e19a75071"
 
+Must be changed and be unique per deployment : use uuidgen to generate a fresh UUID and replace the example value in hosts.yaml.
+
+ - ceph_rbd_cache / ceph_rbd_cache_max / ceph_rbd_cache_target
+
+RBD cache settings can be modified here, see [the documentation](https://docs.ceph.com/en/reef/rbd/rbd-config-ref/).
+
+ - incus_name:
+
+Must be changed, it's the name of the Incus cluster being created.
+
+ - ovn_name:
+
+Can be changed, it's the name of the OVN network deployed across the servers.
+
 ### Baremetal vars :
 
  - ansible_connection: incus
@@ -118,6 +133,10 @@ Depending on your setup, you should consider changing this.
 Depending on your setup, could be 'true', and will default to "sudo" privilege escalation.
 
+ - ansible_incus_project: dev-incus-deploy
+
+Must be commented out if you're deploying on hardware over an SSH connection.
+
 ### incus_init vars :
 
  - network: LOCAL: parent: enp5s0 / network: UPLINK: parent: enp6s0
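
For readers tweaking these values, here is a minimal sketch of how the global vars described above might be laid out in hosts.yaml. It assumes the usual Ansible inventory nesting under `all: vars:`; the exact layout should be taken from hosts.yaml.example in this repository, and every value below (the UUID, the cache sizes, the names) is a placeholder, not a recommendation :

```yaml
all:
  vars:
    # Must be unique per deployment, e.g. generated with `uuidgen`.
    ceph_fsid: "e2850e1f-7aab-472e-b6b1-824e19a75071"

    # Optional RBD cache tuning, see the Ceph rbd-config-ref documentation.
    # Sizes below are placeholders only.
    ceph_rbd_cache: "2048Mi"
    ceph_rbd_cache_max: "1792Mi"
    ceph_rbd_cache_target: "1536Mi"

    # Must be changed: name of the Incus cluster being created.
    incus_name: my-cluster

    # Can be changed: name of the OVN network deployed across the servers.
    ovn_name: my-ovn
```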
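Similarly, a hedged sketch of what the connection-related vars might look like when targeting real hardware over SSH instead of the Incus connection plugin. The group name `baremetal`, the `ansible_user` value and the `ansible_become` key are assumptions based on the "sudo" privilege escalation note above; only `ansible_connection` and `ansible_incus_project` appear verbatim in the diff :

```yaml
baremetal:
  vars:
    # Use SSH instead of the Incus connection plugin on real hardware.
    ansible_connection: ssh
    ansible_user: ubuntu      # assumption: adjust to your environment
    ansible_become: true      # assumption: defaults to "sudo" privilege escalation
    # Comment out ansible_incus_project when deploying over SSH:
    # ansible_incus_project: dev-incus-deploy
```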