jjak0b.ansible_farm Collection

An Ansible collection used to create a farm of virtual machines and control them with tasks grouped into different provision phases orchestrated by Ansible.

  • What: create a farm composed of multiple VMs for different platforms and targets

  • Where: on any available personal or professional hypervisor host in a network

  • Why: to build a CI/CD workflow, or any other use case, for one or more projects using the same farm structure

  • How: by defining repeatable deploy jobs that create a VM and provision it with repeatable CI/CD and testing jobs

  • When: anytime a developer, a webhook, or any kind of periodic job requests it

This collection covers the "What", "Where", "Why", and "How" according to your project needs. The "When" is up to you :-) .

The roles of this collection focus on:

  • Creating VM definitions from separate configuration files that keep platform-specific and target-specific data apart
  • Provisioning a hypervisor host with VMs built from different VM definitions
  • Provisioning a VM (guest) host with repeatable and cacheable provision phases by using snapshots
  • Not requiring root privileges whenever possible; the main use cases of this collection do not require root privileges by default

The main use case for this collection is to create a VM farm made of different platforms and targets, and to distribute repeatable and cacheable provision phases over it, such as project builds and testing scripts.

This collection is flexible, so you can also use the features it provides for other purposes.

All roles of this collection use some common terms:

  • The VM configuration is a convenient object that describes the permutations of all the platform and target pairs that we need as virtual machines; each pair is turned into a VM definition object after parsing the corresponding platform and target definition files.
  • The VM definition object describes the characteristics of a VM: the hardware it emulates, the firmware or OS installed onto it, and other parameters such as credentials and network host configuration. It is a combination of a platform and a target definition.
  • The platform definition (synonym of OS) defines the OS, disks, network, credentials, and any VM components the OS may require. It also specifies how a resource should be processed and later installed into libvirt.
  • The target definition (synonym of machine, or machine architecture) defines the emulated hardware components of each VM definition, such as CPU, RAM, machine type, emulator, etc.
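As a sketch of how these terms fit together (the field names below are illustrative, not the collection's exact schema), a VM configuration pairing platforms with targets might look like:

```yaml
# Hypothetical VM configuration: every (platform, target) pair listed here
# would be parsed into one VM definition item of the `virtual_machines` list.
vm_config:
  platforms:
    - debian_11      # parsed from a platform definition file, e.g. platforms/debian_11.yml
    - fedora_36
  targets:
    - amd64          # parsed from a target definition file (CPU, RAM, machine type, emulator, ...)
    - arm64
```

With two platforms and two targets, such a configuration would expand into four VM definitions.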

The Hypervisor and VM provisioning

The VM configurations should be defined in the hypervisors inventory, and these configuration vars should then be provided as input to the parse_vms_definitions role, which generates the VM definition items in a virtual_machines list.

These VM definitions should first be prepared with the init_vm_connection role, then installed using the kvm_provision role, and finally provisioned using the guest_provision role.

Standard Usage and How it works

  • For each host in hypervisors

    • ( optional: Assign VMs configurations to the hypervisor host using roles/vm_dispatcher )
    • Provide all VMs configurations as input of roles/parse_vms_definitions
      • This will generate a list of VM Definition items called virtual_machines
    • Each vm item of virtual_machines should be provided as input of:
      • roles/init_vm_connection
        • to add a new Ansible inventory host entry to the global inventory
        • to configure the connection so the controller can connect to the VM
          • Possibly defining a libvirt network and a DHCP entry so the VM can connect through it
        • After that:
          • each VM host is added as an Ansible inventory host
          • the VM definition is added as the vm inventory host var
          • the hypervisor's inventory_hostname is added as the kvm_host inventory host var to keep a reference to its hypervisor node
          • each VM host is added to the following Ansible groups:
            • vms
            • "{{ vm.metadata.name }}"
            • "{{ vm.metadata.platform_name }}"
            • "{{ vm.metadata.target_name }}"
  • For each host in vms, run:

    • roles/kvm_provision delegated to kvm_host, to define and install the VM definition stored in the vm inventory host var
    • roles/guest_provision to provision the VM with the guest lifecycle
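The workflow above can be sketched as a playbook. This is a hedged sketch: the role names come from this README, but the play structure, loop variable usage, and fully-qualified role names are illustrative assumptions, not the collection's documented invocation:

```yaml
# Sketch of the standard usage flow (illustrative; check each role's docs
# for its exact inputs and expected variables).
- name: Build VM definitions on each hypervisor
  hosts: hypervisors
  tasks:
    - name: Generate the `virtual_machines` list from the VM configurations
      ansible.builtin.include_role:
        name: jjak0b.ansible_farm.parse_vms_definitions

    - name: Register each VM as an inventory host and configure its connection
      ansible.builtin.include_role:
        name: jjak0b.ansible_farm.init_vm_connection
      loop: "{{ virtual_machines }}"
      loop_control:
        loop_var: vm          # each item becomes the `vm` inventory host var

- name: Install and provision each VM
  hosts: vms
  gather_facts: false
  tasks:
    - name: Define and install the VM, delegated to its hypervisor
      ansible.builtin.include_role:
        name: jjak0b.ansible_farm.kvm_provision
        apply:
          delegate_to: "{{ kvm_host }}"

    - name: Run the guest lifecycle (init / main / terminate phases)
      ansible.builtin.include_role:
        name: jjak0b.ansible_farm.guest_provision
```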

The VM Guest lifecycle

The lifecycle of the provisioned VM runs the following workflow:

  1. Startup

    • Start the VM
    • Wait until connection is ready
  2. Init use case phase

    • Restore the 'init' snapshot if it exists

    • otherwise fall back to restoring or creating a 'clean' snapshot, then run the init phase:

      1. dependencies pre-phase
        • Run dependencies tasks ({{ import_path }}/dependencies.yaml)
      2. use case phase:
        • Run init tasks {{ import_path }}/init.yaml
        • Create 'init' snapshot
  3. Main use case phase:

    • Run main tasks {{ import_path }}/main.yaml
  4. Terminate use case phase:

    • Run end tasks {{ import_path }}/terminate.yaml whether the main phase succeeds or fails
  5. Shutdown

    • Shut down the VM gracefully first; otherwise force it off
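For example, a phase file such as main.yaml is just an ordinary Ansible task list run against the VM (the task content and paths below are illustrative assumptions, not from the collection):

```yaml
# {{ import_path }}/main.yaml - illustrative main-phase tasks run on the guest
- name: Build the project inside the VM
  ansible.builtin.command:
    cmd: make all
    chdir: /home/user/project   # illustrative project path

- name: Run the test suite
  ansible.builtin.command:
    cmd: make test
    chdir: /home/user/project
```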

Where import_path is a subpath matching the most detailed phase file location, according to the target and platform type of the VM. The import_path is the first path in the following priority list that contains a phase file:

  • "{{ ( phases_lookup_dir_path, vm.metadata.platform_name, vm.metadata.target_name ) | path_join }}"
  • "{{ ( phases_lookup_dir_path, vm.metadata.platform_name ) | path_join }}"
  • "{{ phases_lookup_dir_path }}"

For example, for a VM with platform_name debian_11 and target_name amd64, a phase file is looked up first under phases_lookup_dir_path/debian_11/amd64/, then under phases_lookup_dir_path/debian_11/, and finally under phases_lookup_dir_path/.

Why: a use case may need specific tasks/vars for a target on a platform, or only for the platform; for instance:

  • debian_11 folder (the vm.metadata.platform_name value in platforms/debian_sid.yml)
    • amd64 folder (the vm.metadata.target_name value)
      • tasks or vars files, ... specific to amd64 targets on debian_11 platforms
    • arm64 folder (the vm.metadata.target_name value)
      • tasks or vars files, ... specific to arm64 targets on debian_11 platforms
  • fedora_36 folder (the vm.metadata.platform_name value)
    • amd64 folder (the vm.metadata.target_name value)
      • tasks or vars files, ... specific to amd64 targets on fedora_36 platforms
    • tasks or vars files, ... specific to fedora_36 platforms on any target
  • tasks or vars files, ... generic for any platform and target, used when no file exists under a more specific import_path subpath

The import_path is useful when some dependencies have a different alias in some platform's package manager, or when the user needs "ad hoc" tasks/vars for some other use cases.

Support and Requirements

Read the documentation of each role for its specific requirements. The following tables show support and requirements for the full collection.

  • Required requirements are minimal and allow the collection to work, but you will need at least some of the recommended requirements for most use cases
  • Recommended requirements are used inside some builtin templates, target definitions, and callback-tasks for common use cases

Ansible controller host

Platform support:
  • Any GNU/Linux distribution: should work if Ansible supports it
  • Debian 11, 12: supported, tested

Requirements:
  • ansible >=2.10.8
  • python >=2.6
  • sshpass

Hypervisor target host:

Platform support:
  • Any GNU/Linux distribution (and others): should work if libvirt and a hypervisor driver are supported; partially tested (no SELinux)
  • Debian 11, 12: supported, tested
  • Ubuntu 22.04 LTS: should work, not tested
  • Arch Linux: should work, not tested

Requirements (any platform):
  • Required
    • Ansible managed node requirements
    • Configured libvirt environment
    • Configured SSH server
    • Configured hypervisor compatible with libvirt.

      Note: only builtin templates and target definitions use KVM or QEMU, so you can use the hypervisor you want and override the builtin ones if needed.

  • Recommended hypervisors
    • KVM
    • QEMU
  • Required commands
    • virsh
    • qemu-img (external snapshots only)
  • Recommended commands
    • virt-sysprep
    • qemu-system-<arch>
    • qemu-img
    • unzip
    • gzip
    • bunzip2
    • xz

Debian 11, 12 packages:
  • Required
    • libvirt-daemon-system
    • python3-libvirt
    • python3-lxml
    • libvirt-clients
    • qemu-utils
  • Recommended
    • libguestfs-tools
    • qemu-kvm
    • qemu-utils
    • unzip
    • bzip2
    • gzip
    • xz-utils

Arch Linux packages:
  • Required
    • libvirt
    • libvirt-python
    • python-lxml
    • qemu-img
  • Recommended
    • guestfs-tools
    • qemu
    • qemu-img
    • unzip
    • bzip2
    • gzip
    • xz

Note: QEMU or KVM are recommended for the following reasons:

  • The collection supports the qemu:///session URI by default only when the VDE and user (userspace connection) virtual network types are supported by the hypervisor.
  • Since libvirt 9.0.0, support for passt as a network interface backend for userspace connections has been added, but it is unstable, so the VM template uses the network type user with that backend only since libvirt 9.2.0.
  • Prior to libvirt 9.2.0, the VDE and user virtual networks are supported only when a custom network interface can be added through the XML libvirt template via the libvirt QEMU namespace. Other hypervisors may therefore require specific extra configuration, such as defining another VM XML template.
  • SSH connection plugin support may be achieved with qemu:///session only when the SSH port of the VM is reachable from the hypervisor. The user network interface built with the QEMU namespace allows specifying a port forward with the hostfwd option, or alternatively using a port forward with passt; but the former is not supported by the libvirt XML format for other hypervisors, and the latter is not supported on libvirt versions prior to 9.2.0.
  • The community.libvirt.libvirt_qemu connection plugin is supported only for a local (Ansible controller) hypervisor, and only if you pre-install the QEMU Guest Agent on the VM OS. The use of ssh+qemu URIs has not been tested.

VM target host:

There are only a few specific requirements for the platforms used as virtual machines.

Platform support:
  • GNU/Linux based OS: supported, tested (no SELinux)
  • Mac OS: supported, not tested
  • Other Unix-like OS: should work, not tested
  • Windows: supported, not tested

Requirements (GNU/Linux based OS):
  • Ansible managed node requirements
    • Configured SSH server (preinstalled in the VM image)
  • Support for /bin/sync

Requirements (Windows):
  • Ansible managed node requirements
    • Configured SSH server (preinstalled in the VM image)
  • Requirements for the Sync executable

Dependencies

The following collections are Ansible dependencies of this collection's roles and can be installed with ansible-galaxy install -r requirements.yml.
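For reference, a requirements.yml for collection dependencies has this shape (a sketch: the actual dependency list lives in the collection's own requirements.yml; community.libvirt is the only collection explicitly mentioned above):

```yaml
# requirements.yml (illustrative excerpt; see the collection's own file
# for the authoritative dependency list)
collections:
  - name: community.libvirt
```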