Idempotence issues in "proxmox_virtual_environment_vm" after cloning from another VM #1299

Open
hamannju opened this issue May 15, 2024 · 4 comments

Comments


hamannju commented May 15, 2024

Hello. So I have another idempotency issue. I create virtual machines by cloning another VM so that I can preconfigure a lot of common settings. This mostly works, but if I run terraform plan after applying my configuration, it proposes to delete essentially all of the configuration that is defined in blocks. Interestingly, this does not apply to disks.

This is the resource definition:

resource "proxmox_virtual_environment_vm" "worker_node" {
  count     = var.worker_node_count
  name      = "k8s-${var.environment_name}-worker-${count.index + 1}"
  pool_id   = var.pool_name
  node_name = var.pve_node_name
  cpu {
    cores = var.vm_config["worker_node"].cores
    type  = var.vm_config["worker_node"].cpu_type
  }
  clone {
    vm_id = var.vm_config["worker_node"].template_id
  }
  memory {
    dedicated = var.vm_config["worker_node"].memory
  }
  stop_on_destroy = true
  depends_on = [proxmox_virtual_environment_pool.k8s_pool]
}
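
For reference, the vm_config variable used above is a map of objects along these lines (a rough sketch only, with the field types inferred from the references in the resource; the exact definition is omitted here):

variable "vm_config" {
  # Sketch: field names and types inferred from the resource arguments above.
  type = map(object({
    cores       = number
    cpu_type    = string
    template_id = number
    memory      = number
  }))
}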

And this is the corresponding output from terraform plan:

module.k8s_dev.proxmox_virtual_environment_vm.worker_node[1] will be updated in-place
  ~ resource "proxmox_virtual_environment_vm" "worker_node" {
        id                      = "128"
      ~ ipv4_addresses          = [
          - [
              - "127.0.0.1",
            ],
          - [],
          - [],
          - [],
          - [],
          - [],
          - [],
          - [
              - "10.42.101.226",
            ],
          - [],
        ] -> (known after apply)
      ~ ipv6_addresses          = [
          - [
              - "::1",
            ],
          - [],
          - [],
          - [],
          - [],
          - [],
          - [],
          - [
              - "fe80::be24:11ff:fed2:ebc6",
            ],
          - [
              - "fe80::be24:11ff:fe9b:e1eb",
            ],
        ] -> (known after apply)
        name                    = "k8s-dev-worker-2"
      ~ network_interface_names = [
          - "lo",
          - "bond0",
          - "dummy0",
          - "teql0",
          - "tunl0",
          - "sit0",
          - "ip6tnl0",
          - "enxbc2411d2ebc6",
          - "enxbc24119be1eb",
        ] -> (known after apply)
        # (24 unchanged attributes hidden)

      - network_device {
          - bridge       = "vmbr3" -> null
          - disconnected = false -> null
          - enabled      = true -> null
          - firewall     = true -> null
          - mac_address  = "BC:24:11:D2:EB:C6" -> null
          - model        = "virtio" -> null
          - mtu          = 0 -> null
          - queues       = 0 -> null
          - rate_limit   = 0 -> null
          - vlan_id      = 0 -> null
            # (1 unchanged attribute hidden)
        }
      - network_device {
          - bridge       = "vmbr5" -> null
          - disconnected = false -> null
          - enabled      = true -> null
          - firewall     = true -> null
          - mac_address  = "BC:24:11:9B:E1:EB" -> null
          - model        = "virtio" -> null
          - mtu          = 1 -> null
          - queues       = 0 -> null
          - rate_limit   = 0 -> null
          - vlan_id      = 0 -> null
            # (1 unchanged attribute hidden)
        }

      - vga {
          - enabled = true -> null
          - memory  = 0 -> null
          - type    = "qxl" -> null
        }

        # (3 unchanged blocks hidden)
    }

Plan: 0 to add, 5 to change, 0 to destroy.

I assume this falls under the umbrella of the clone-VM issues you seem to be addressing elsewhere. I just found it interesting that Terraform does not try to delete the disks while proposing to delete essentially all of the networking configuration and the graphics configuration.

hamannju added the 🐛 bug label May 15, 2024

bpg (Owner) commented May 16, 2024

Indeed, this is the same common problem with clone. The VM's resources are copied and stored in the provider state, but are missing from the plan. So, on the next apply, TF tries to reconcile the plan with the state, sees that the state has lots of "extra" bits, and then tries to remove them. #1231 will solve this, but it's still quite far from completion.
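
In the meantime, a commonly used stopgap (plain Terraform, nothing provider-specific, and only appropriate if you never want to manage those blocks from this configuration) is to ignore the clone-populated blocks via lifecycle.ignore_changes. A minimal sketch for illustration:

resource "proxmox_virtual_environment_vm" "worker_node" {
  count     = var.worker_node_count
  name      = "k8s-${var.environment_name}-worker-${count.index + 1}"
  pool_id   = var.pool_name
  node_name = var.pve_node_name
  # ... cpu, clone, memory, etc. as in the resource above ...

  lifecycle {
    # Ignore drift in blocks that are populated by the cloned template
    # rather than by this configuration.
    ignore_changes = [
      network_device,
      vga,
    ]
  }
}

Note that this also hides legitimate changes to those blocks, so it is only a workaround until #1231 lands.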

hamannju (Author) commented

I contributed some coffees to keep you going.

bpg (Owner) commented May 17, 2024

Thanks a lot! ❤️

bpg-autobot bot (Contributor) commented Nov 14, 2024

Marking this issue as stale due to inactivity in the past 180 days. This helps us focus on the active issues. If this issue is reproducible with the latest version of the provider, please comment. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thank you!

bpg-autobot bot added the stale label Nov 14, 2024
bpg-autobot bot closed this as not planned Dec 14, 2024
github-project-automation bot moved this from 📥 Inbox to ✅ Done in terraform-provider-proxmox Dec 14, 2024
bpg added the acknowledged label and removed the stale label Dec 14, 2024
bpg reopened this Dec 14, 2024
github-project-automation bot moved this from ✅ Done to 📥 Inbox in terraform-provider-proxmox Dec 14, 2024