configdrive2? How does this work? #37

Open
prologic opened this issue Nov 14, 2020 · 21 comments

@prologic

I set the following flags:

  --proxmoxve-vm-cienabled=1 \
  --proxmoxve-vm-citype=configdrive2 \

And I see the CloudDrive is created and attached to the VM, but most of the values are not filled in.

How do we fill these in when creating the configdrive2 drive? I don't see where this is happening in the code.

@lnxbil
Owner

lnxbil commented Nov 18, 2020

Very good question, and I don't see that either.

Maybe @travisghansen can help; he implemented it.

@travisghansen
Contributor

Create the cloud-init settings/drive on your template VM; they will then carry over. The type can change on a per-VM basis as necessary, and settings will still work as defined in the template drive.
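
For context, roughly what that template-side setup looks like with Proxmox's qm CLI (the VM ID 9000 and all values below are illustrative placeholders, not from this thread):

  # Attach a cloud-init drive to the template VM and pick the drive type
  qm set 9000 --ide2 local-lvm:cloudinit --citype configdrive2

  # Seed credentials and network settings; clones inherit these
  qm set 9000 --ciuser root --cipassword 'secret' --sshkeys ~/.ssh/id_rsa.pub
  qm set 9000 --ipconfig0 ip=192.168.1.50/24,gw=192.168.1.1 --nameserver 1.1.1.1

  # Convert the VM into a template that the driver can clone
  qm template 9000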

@prologic
Author

Create the cloud-init settings/drive on your template VM; they will then carry over. The type can change on a per-VM basis as necessary, and settings will still work as defined in the template drive.

You kind of missed my point. I shouldn't have to use a VM template in order for cloud-init to work. In fact, it can work without a template. This is how I manage many of my VMs now. I just don't have a way to automate their creation (hence Terraform and this provider).

Can we refactor the code so that I can say: please use a CloudDrive, here are the things to shove in it, attach it, and create the VM?

Yes? :)

@travisghansen
Contributor

travisghansen commented Nov 19, 2020

I suppose anything can be done, but the cloud-init support was intentionally limited to templates in this context.

Can you spell out the use-case in a bit of detail to help me understand what you're doing? You're somehow using terraform with docker-machine?

EDIT:

Looking at the code more closely, the comment above is incorrect. Its scope in the non-template scenario is to effectively use it as a means of sending the newly minted SSH key for the docker-machine into the VM in scenarios where username/password are not feasible or wanted.

@prologic
Author

I suppose anything can be done, but the cloud-init support was intentionally limited to templates in this context.

Good :D I wasn't going nuts then! This is totally doable without "VM Templates" :)

Can you spell out the use-case in a bit of detail to help me understand what you're doing? You're somehow using terraform with docker-machine?

Ignore the docker-machine part; I was just experimenting with trying to (again) automate my Proxmox VE based Docker Swarm nodes.

So here's what I want:

  • To be able to spin up nodes in Proxmox VE with Terraform (I can do this with this provider, it works just fine).
  • I also want a CloudDrive (as it is called in Proxmox VE) constructed for me by providing, at a minimum, the following values:
    • IP Address, Subnet, Gateway
    • DNS Servers
    • Root Password
    • SSH Public Key

I should not have to go and pre-configure some random VM template for this :)

@prologic
Author

For example, a bunch of VMs I spin up in Proxmox VE today run uLinux, but unfortunately I have to build them by hand (unless I use a VM Template). And if I choose to swap to, say, RancherOS, then I have to go build another VM Template.

We can avoid that step entirely with VMs and OSes that support CloudInit natively :)

@travisghansen
Contributor

Can you send over the exact command you're attempting to use, along with the specific ISO etc., for me to test when I get a moment? The code looks like it might be relatively easy to extend for this, but I'm not sure yet.

@travisghansen
Contributor

Also, it's kinda hard to ignore the docker-machine part, since this project is... a docker-machine driver. It sounds to me like you're just building pure VMs with terraform... am I missing something? Why is docker-machine (and thus this driver) in the picture at all?

Also note, the VM templates are not 'random' but are meant to scale more sanely in various fashions vs manually building out machines. If you use VM templates you have potential space savings from sharing the same base VM, you have a much faster boot/install process (everything is already installed), and it's just generally more robust than building essentially one-off VMs.

I'm not exactly sure how these are auto-joining Swarm, but with rancher/k8s you click a few buttons and scale out the cluster as much as desired... no manual intervention at all.

@prologic
Author

Sure, I will provide an example.

@prologic
Author

prologic commented Nov 19, 2020

So I apologize, I was mixing up a Terraform provider and this project.

I just had a look at the help output of this Docker Machine Driver:

$ dm create -d proxmoxve --help
Usage: docker-machine create [OPTIONS] [arg...]

Create a machine

Description:
   Run 'docker-machine create --driver name --help' to include the create flags for that driver in the help text.

Options:

   --driver, -d "virtualbox"										Driver to create machine with. [$MACHINE_DRIVER]
   --engine-env [--engine-env option --engine-env option]						Specify environment variables to set in the engine
   --engine-insecure-registry [--engine-insecure-registry option --engine-insecure-registry option]	Specify insecure registries to allow with the created engine
   --engine-install-url "https://get.docker.com"							Custom URL to use for engine installation [$MACHINE_DOCKER_INSTALL_URL]
   --engine-label [--engine-label option --engine-label option]						Specify labels for the created engine
   --engine-opt [--engine-opt option --engine-opt option]						Specify arbitrary flags to include with the created engine in the form flag=value
   --engine-registry-mirror [--engine-registry-mirror option --engine-registry-mirror option]		Specify registry mirrors to use [$ENGINE_REGISTRY_MIRROR]
   --engine-storage-driver 										Specify a storage driver to use with the engine
   --proxmoxve-debug-driver										enables debugging in the driver [$PROXMOXVE_DEBUG_DRIVER]
   --proxmoxve-debug-resty										enables the resty debugging [$PROXMOXVE_DEBUG_RESTY]
   --proxmoxve-provision-strategy "cdrom"								Provision strategy (cdrom|clone) [$PROXMOXVE_PROVISION_STRATEGY]
   --proxmoxve-proxmox-host "192.168.1.253"								Host to connect to [$PROXMOXVE_PROXMOX_HOST]
   --proxmoxve-proxmox-node 										Node to use (defaults to host) [$PROXMOXVE_PROXMOX_NODE]
   --proxmoxve-proxmox-pool 										pool to attach to [$PROXMOXVE_PROXMOX_POOL]
   --proxmoxve-proxmox-realm "pam"									Realm to connect to (default: pam) [$PROXMOXVE_PROXMOX_REALM]
   --proxmoxve-proxmox-user-name "root"									User to connect as [$PROXMOXVE_PROXMOX_USER_NAME]
   --proxmoxve-proxmox-user-password 									Password to connect with [$PROXMOXVE_PROXMOX_USER_PASSWORD]
   --proxmoxve-ssh-password 										Password to log in to the guest OS (default tcuser for rancheros) [$PROXMOXVE_SSH_PASSWORD]
   --proxmoxve-ssh-port "22"										SSH port in the guest to log in to (defaults to 22) [$PROXMOXVE_SSH_PORT]
   --proxmoxve-ssh-username 										Username to log in to the guest OS (default docker for rancheros) [$PROXMOXVE_SSH_USERNAME]
   --proxmoxve-vm-cienabled 										cloud-init enabled (implied with clone strategy 0=false, 1=true, ''=default) [$PROXMOXVE_VM_CIENABLED]
   --proxmoxve-vm-citype 										cloud-init type (nocloud|configdrive2) [$PROXMOXVE_VM_CITYPE]
   --proxmoxve-vm-clone-full "2"									make a full clone or not (0=false, 1=true, 2=use proxmox default logic [$PROXMOXVE_VM_CLONE_FULL]
   --proxmoxve-vm-clone-vmid 										vmid to clone [$PROXMOXVE_VM_CLONE_VNID]
   --proxmoxve-vm-cpu 											Emulatd CPU [$PROXMOXVE_VM_CPU]
   --proxmoxve-vm-cpu-cores 										number of cpu cores [$PROXMOXVE_VM_CPU_CORES]
   --proxmoxve-vm-cpu-sockets 										number of cpus [$PROXMOXVE_VM_CPU_SOCKETS]
   --proxmoxve-vm-image-file 										storage of the image file (e.g. local:iso/rancheros-proxmoxve-autoformat.iso) [$PROXMOXVE_VM_IMAGE_FILE]
   --proxmoxve-vm-memory "8"										memory in GB [$PROXMOXVE_VM_MEMORY]
   --proxmoxve-vm-net-bridge 										bridge to attach network to [$PROXMOXVE_VM_NET_BRIDGE]
   --proxmoxve-vm-net-firewall 										enable/disable firewall (0=false, 1=true, ''=default) [$PROXMOXVE_VM_NET_FIREWALL]
   --proxmoxve-vm-net-model "virtio"									Net Interface model, default virtio (e1000, virtio, realtek, etc...) [$PROXMOXVE_VM_NET_MODEL]
   --proxmoxve-vm-net-mtu 										set nic mtu (''=default) [$PROXMOXVE_VM_NET_MTU]
   --proxmoxve-vm-net-tag "0"										vlan tag [$PROXMOXVE_VM_NET_TAG]
   --proxmoxve-vm-numa 											enable/disable NUMA [$PROXMOXVE_VM_NUMA]
   --proxmoxve-vm-protection 										protect the VM and disks from removal (0=false, 1=true, ''=default) [$PROXMOXVE_VM_PROTECTION]
   --proxmoxve-vm-scsi-attributes 									scsi0 attributes [$PROXMOXVE_VM_SCSI_ATTRIBUTES]
   --proxmoxve-vm-scsi-controller "virtio-scsi-pci"							scsi controller model (default: virtio-scsi-pci) [$PROXMOXVE_VM_SCSI_CONTROLLER]
   --proxmoxve-vm-start-onboot 										make the VM start automatically onboot (0=false, 1=true, ''=default) [$PROXMOXVE_VM_START_ONBOOT]
   --proxmoxve-vm-storage-path 										storage to create the VM volume on [$PROXMOXVE_VM_STORAGE_PATH]
   --proxmoxve-vm-storage-size "16"									disk size in GB [$PROXMOXVE_VM_STORAGE_SIZE]
   --proxmoxve-vm-storage-type 										storage type to use (QCOW2 or RAW) [$PROXMOXVE_VM_STORAGE_TYPE]
   --proxmoxve-vm-vmid-range 										range of acceptable vmid values <low>[:<high>] [$PROXMOXVE_VM_VMID_RANGE]
   --swarm												Configure Machine to join a Swarm cluster
   --swarm-addr 											addr to advertise for Swarm (default: detect and use the machine IP)
   --swarm-discovery 											Discovery service to use with Swarm
   --swarm-experimental											Enable Swarm experimental features
   --swarm-host "tcp://0.0.0.0:3376"									ip/socket to listen on for Swarm master
   --swarm-image "swarm:latest"										Specify Docker image to use for Swarm [$MACHINE_SWARM_IMAGE]
   --swarm-join-opt [--swarm-join-opt option --swarm-join-opt option]					Define arbitrary flags for Swarm join
   --swarm-master											Configure Machine to be a Swarm master
   --swarm-opt [--swarm-opt option --swarm-opt option]							Define arbitrary flags for Swarm master
   --swarm-strategy "spread"										Define a default scheduling strategy for Swarm
   --tls-san [--tls-san option --tls-san option]							Support extra SANs for TLS certs

And note that there are no options to pass in even the most basic CloudInit parameters.

If we added at least the following:

  • --proxmoxve-vm-cloudinit-ip
  • --proxmoxve-vm-cloudinit-gw
  • --proxmoxve-vm-cloudinit-dns1
  • --proxmoxve-vm-cloudinit-rootpw
  • --proxmoxve-vm-cloudinit-sshkey

If we then used those to create the CloudDrive and attached it to the VM, I think this would be enough to be useful.
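
For illustration only, an invocation with those proposed flags might look like this (none of the --proxmoxve-vm-cloudinit-* flags exist in the driver today; the values are placeholders):

  docker-machine create -d proxmoxve \
    --proxmoxve-vm-cienabled=1 \
    --proxmoxve-vm-citype=configdrive2 \
    --proxmoxve-vm-cloudinit-ip "192.168.1.50/24" \
    --proxmoxve-vm-cloudinit-gw "192.168.1.1" \
    --proxmoxve-vm-cloudinit-dns1 "1.1.1.1" \
    --proxmoxve-vm-cloudinit-rootpw "secret" \
    --proxmoxve-vm-cloudinit-sshkey "$(cat ~/.ssh/id_rsa.pub)" \
    swarm-node-1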

@travisghansen
Contributor

Can you send an example of what you’re invoking now with the specific distro/iso etc?

@prologic
Author

prologic commented Nov 19, 2020

I'm basically using a modified script from the README:

$ cat bin/create-docker-node
#!/bin/sh

set -ex

PVE_NODE="vz1"
PVE_HOST="****"

PVE_USER="root"
PVE_REALM="pam"
PVE_PASSWD="****"

PVE_STORAGE_NAME="zfs"
PVE_STORAGE_SIZE="4"

SSH_USERNAME="rancher"
SSH_PASSWORD="rancher"

PVE_MEMORY=2
PVE_CPU_CORES=1
PVE_IMAGE_FILE="nfs:iso/rancheros-proxmoxve-autoformat.iso"
VM_NAME="${1}"

docker-machine rm --force $VM_NAME > /dev/null 2>&1 || true

docker-machine --debug \
  create \
  --driver proxmoxve \
  --proxmoxve-proxmox-host $PVE_HOST \
  --proxmoxve-proxmox-node $PVE_NODE \
  --proxmoxve-proxmox-user-name $PVE_USER \
  --proxmoxve-proxmox-user-password $PVE_PASSWD \
  --proxmoxve-proxmox-realm $PVE_REALM \
  \
  --proxmoxve-vm-cienabled=1 \
  --proxmoxve-vm-citype=configdrive2 \
  --proxmoxve-vm-storage-path $PVE_STORAGE_NAME \
  --proxmoxve-vm-storage-size $PVE_STORAGE_SIZE \
  --proxmoxve-vm-cpu-cores $PVE_CPU_CORES \
  --proxmoxve-vm-memory $PVE_MEMORY \
  --proxmoxve-vm-image-file "$PVE_IMAGE_FILE" \
  \
  --proxmoxve-ssh-username $SSH_USERNAME \
  --proxmoxve-ssh-password $SSH_PASSWORD \
  \
  --proxmoxve-debug-resty \
  --proxmoxve-debug-driver \
  \
  $VM_NAME

But as this lacks any way to actually create the CloudDrive, this won't work.

@travisghansen
Contributor

The above does create a cloud drive and adds the machine SSH key to it, right? The ISO may not have cloud-init installed, however.

@prologic
Author

It was blank for me? 🤔

@travisghansen
Contributor

OK, I've prototyped this up, but it appears rancheros simply ignores all the cloud-init values besides the ssh key anyhow. Got another distro you want me to try out instead of rancheros?

@prologic
Author

OK, I've prototyped this up, but it appears rancheros simply ignores all the cloud-init values besides the ssh key anyhow. Got another distro you want me to try out instead of rancheros?

Yes!

uLinux respects the following:

  • Password
  • SSH Key
  • IP Address
  • DNS Servers
  • and a few more I forget

😀

You can either download an ISO from the release page or build from source.

@travisghansen
Contributor

@prologic I'm not sure this uLinux ISO is prepped to do what needs to be done. I'm just booting manually at this point, but a few things to note:

  • ip settings are being ignored by cloud-init (I didn't try nameservers, user, or search domain)
  • qemu-guest-agent is required
  • I'll need a script that will install docker (or ignore it if it's already installed)
  • all of the above should persist reboots

Generically, docker-machine allows you to give it the URL of an install script (--engine-install-url) which will ensure docker is installed. The sequence of events is:

  • docker-machine provisions a node and ensures ssh access
  • once ssh access has been established it invokes the engine-install-url via ssh on the target node (I think at least bash and curl are required but I'll need to confirm that exactly)
    • generally these scripts test for docker already present and if so exit immediately, otherwise they install docker
  • once docker is confirmed running, machine then configures the certs etc. over ssh as well

In the case of this project, qemu-guest-agent must be running to determine what IP has been allocated to the machine in order to proceed with ssh commands.
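
For reference, enabling the agent looks roughly like this (a sketch assuming a Debian/Ubuntu-style guest; the VM ID 100 is a placeholder):

  # On the Proxmox host: turn on the guest agent option for the VM
  qm set 100 --agent enabled=1

  # Inside the guest: install and start the agent
  apt-get install -y qemu-guest-agent
  systemctl enable --now qemu-guest-agent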

Regarding the script, I would recommend you host it on your project site (i.e. GitHub), since you know the project best and can update it as appropriate.
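
A minimal sketch of such an install script, following the test-then-install pattern described above (hypothetical; get.docker.com is simply the driver's documented default for --engine-install-url):

  #!/bin/sh
  set -e

  # Exit immediately if docker is already installed
  if command -v docker >/dev/null 2>&1; then
      exit 0
  fi

  # Otherwise install it via the standard convenience script
  curl -fsSL https://get.docker.com | sh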

@prologic
Author

In the case of this project, qemu-guest-agent must be running to determine what IP has been allocated to the machine in order to proceed with ssh commands.

This is simply not true. I do not have the guest agent on any of my VMs in Proxmox VE.

It's unfortunate cloud-init wasn't working for you; it does for me :) I'll look at this more closely later when I have more time.

@travisghansen
Contributor

@prologic Are any of your VMs working with docker-machine? I didn't say Proxmox generally requires the agent, but images using this integration do. It's how the machine driver discovers the IP associated with the newly created VM. Without it, this driver simply has no way to determine what IP can be used for connecting to the VM.

@prologic
Author

Yes, all of them, using the generic driver.

@travisghansen
Contributor

OK, not sure what that is, but it seems unrelated to this project. In any case, the agent is required for this project to work.
