Migrating VMs to new hosts labels disks as 'deleted' #160

Open

jwealthdale opened this issue Mar 25, 2024 · 1 comment

@jwealthdale

We've just added new hosts to our existing VMware cluster running ESXi 8.0. Migrating existing VMs from the old hosts to the new hosts and importing the Terraform resources back into state causes each existing disk to be labelled as '<deleted>', and new disks are created on apply. The underlying datastores are the same.

Versions

Terraform: 1.7.5
vSphere Provider: 2.7.0

# module.xxx.vsphere_virtual_machine.vm[0] will be updated in-place
  ~ resource "vsphere_virtual_machine" "vm" {
      ~ host_system_id                          = "xxx" -> (known after apply)
        id                                      = "xxx"
      + ignored_guest_ips                       = []
      ~ imported                                = true -> false
        name                                    = "xxx"
      ~ resource_pool_id                        = "xxx" -> "xxx"
        tags                                    = []
        # (68 unchanged attributes hidden)
 
      + clone {
          + linked_clone  = false
          + template_uuid = "xxx"
          + timeout       = 30
 
          + customize {
              + dns_server_list = [
                  + "x.x.x.x",
                  + "x.x.x.x",
                ]
              + ipv4_gateway    = "x.x.x.x"
              + timeout         = 10
 
              + network_interface {
                  + ipv4_address = "x.x.x.x"
                  + ipv4_netmask = 24
                }
 
              + windows_options {
                  + admin_password        = (sensitive value)
                  + auto_logon            = true
                  + auto_logon_count      = 2
                  + computer_name         = "xxx"
                  + domain_admin_password = (sensitive value)
                  + domain_admin_user     = "xxx"
                  + full_name             = "Administrator"
                  + join_domain           = "xxx"
                  + organization_name     = "xxx"
                  + run_once_command_list = []
                  + time_zone             = xxx
                }
            }
        }
 
      ~ disk {
          ~ label            = "Hard disk 1" -> "<deleted>"
            # (19 unchanged attributes hidden)
        }
      + disk {
          + attach           = false
          + controller_type  = "scsi"
          + datastore_id     = "xxx"
          + disk_mode        = "persistent"
          + disk_sharing     = "sharingNone"
          + eagerly_scrub    = false
          + io_limit         = -1
          + io_reservation   = 0
          + io_share_count   = 0
          + io_share_level   = "normal"
          + keep_on_remove   = false
          + key              = 0
          + label            = "disk0"
          + size             = 100
          + thin_provisioned = false
          + unit_number      = 0
          + write_through    = false
        }
 
        # (2 unchanged blocks hidden)
    }

As a workaround, we've had to ignore changes to the `disk` blocks.
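For anyone else hitting this, here's a minimal sketch of that workaround using Terraform's standard `lifecycle` meta-argument (the rest of the resource body is elided):

```hcl
resource "vsphere_virtual_machine" "vm" {
  # ... existing VM configuration ...

  lifecycle {
    # Ignore all diffs on the disk blocks so Terraform stops
    # planning a delete/recreate of the migrated disks.
    ignore_changes = [disk]
  }
}
```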

@damnsam
Contributor

damnsam commented Mar 25, 2024

I've had that happen too; usually it means the disk attributes changed so much between states that Terraform thinks it's an entirely new disk. Whenever I have a VM move to a new vCenter/datacenter/cluster/etc., I do the following:

  • update the Terraform files to reflect the new VM's config
  • back up the tfstate (just in case), then either clear out or remove the offending resource from the tfstate (depending on whether you have one tfstate per resource or a massive tfstate for all resources)
  • re-import the VM (terraform import 'module.xxx.vsphere_virtual_machine.vm[0]' '//NewDatacenter/vm/VMFolder(s)/xxx')
  • re-apply the plan (it should only update the clone section and each disk's keep_on_remove attribute)

I would test this with a fresh local tfstate and a test VM to get comfortable with the process before touching live resources; the commands below sketch the state steps.
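Roughly, the backup and re-import steps look like this. This is a sketch assuming a local state file; the module address and inventory path are the placeholders from the example above, so substitute your own:

```shell
# Back up the state first, just in case
cp terraform.tfstate terraform.tfstate.backup

# Remove the stale VM resource from state
terraform state rm 'module.xxx.vsphere_virtual_machine.vm[0]'

# Re-import the VM by its inventory path in the new datacenter
terraform import 'module.xxx.vsphere_virtual_machine.vm[0]' '//NewDatacenter/vm/VMFolder(s)/xxx'

# The resulting plan should only touch the clone section and keep_on_remove
terraform plan
```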
