
inconsistently changing or eliminating the compute resources on zones #417

Closed
rnelson0 opened this issue Oct 28, 2021 · 3 comments
Labels: bug

Comments

rnelson0 (Contributor) commented Oct 28, 2021

vRA Version

vRA 8.4.2 on-prem

Terraform Version

1.0.5

vRA Terraform Provider Version

0.4.0 - the problem exists
0.3.11 - the problem does not exist, but the changes caused by the newer version are not reverted (manual remediation required)

Affected Resource(s)

  • vra_zone

Terraform Configuration Files

Each zone is created in a config like this. The cloud account bedford_vcenter supports two zones, Bedford and Belfast, where all others support a single zone. In vRA itself, the zone's Compute tab is set to Include all unassigned compute, rather than a specific list of compute resources.

# vra_cloud_account_vmc.bedford_vcenter:
resource "vra_cloud_account_vmc" "bedford_vcenter" {
  accept_self_signed_cert = var.insecure
  description             = "Label: \"Bedford\""
  api_token               = var.refresh_token
  name                    = "Bedford vCenter"
  regions = [
    "Datacenter:datacenter-58",
    "Datacenter:datacenter-488",
  ]
  vcenter_hostname = var.bedford_hostname
  vcenter_password = var.vcenter_password
  vcenter_username = var.vcenter_username
  nsx_hostname     = ""
  sddc_name        = ""

  lifecycle {
    ignore_changes = [
      api_token,
    ]
  }
}

# vra_region.bedford_region
data "vra_region" "bedford_region" {
  cloud_account_id = resource.vra_cloud_account_vmc.bedford_vcenter.id
  region = "Datacenter:datacenter-58"
}

# vra_zone.bedford_zone:
resource "vra_zone" "bedford_zone" {
  description      = ""
  folder           = "IaaS/EXSRE/vRA Deploys"
  name             = "Bedford vCenter / Bedford"
  placement_policy = "DEFAULT"
  region_id        = data.vra_region.bedford_region.id

  tags {
    key   = "dev-vra"
    value = "target"
  }
  tags {
    key   = "vcenter"
    value = "Bedford"
  }

  lifecycle {
    ignore_changes = [
      links,
    ]
  }
}

# vra_region.belfast_region
data "vra_region" "belfast_region" {
  cloud_account_id = resource.vra_cloud_account_vmc.bedford_vcenter.id
  region = "Datacenter:datacenter-488"
}

# vra_zone.belfast_zone:
resource "vra_zone" "belfast_zone" {
  description      = "Belfast datacenter zone"
  folder           = "IaaS/EXSRE/vRA Deploys"
  name             = "Bedford vCenter / Belfast"
  placement_policy = "DEFAULT"
  region_id        = data.vra_region.belfast_region.id

  tags {
    key   = "vcenter"
    value = "Belfast"
  }

  lifecycle {
    ignore_changes = [
      links,
    ]
  }
}

Expected Behavior

We did not make any changes to the zones when the provider version was increased, so we expected only metadata changes (e.g., updated_at on the vra_cloud_account_vmc resources).

Actual Behavior

In addition to the expected metadata changes, the "changes made outside of Terraform" section of the plan listed numerous compute_ids for each zone. Because our pipeline is automated, it then proceeded to apply and made a DIFFERENT change, to custom_properties (which we do not set in our code). This caused the Compute setting on some zones to change to Manually select compute, while other zones were left alone. Repeated runs of terraform apply changed different subsets of zones, eventually converging, after multiple runs, on all zones being set to Manually select compute. Since no compute had ever been manually specified, each zone ended up with 0 compute resources, and all deployments to those zones immediately failed.
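A possible mitigation while stuck on an affected provider version might be to extend the existing lifecycle block to also ignore drift on custom_properties, since that is the attribute the plan rewrote. This is a sketch only; whether ignoring custom_properties actually prevents the Compute setting from being changed is an untested assumption:

```hcl
resource "vra_zone" "bedford_zone" {
  # ... existing arguments unchanged ...

  lifecycle {
    ignore_changes = [
      links,
      # Assumption (not verified): ignoring the attribute the 0.4.0
      # provider rewrites may stop the unwanted Compute change.
      custom_properties,
    ]
  }
}
```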

The code and output are at https://gist.github.com/rnelson0/152ab1530569f12916fb16c2a8e5144c. Our pipeline can be a little chatty because it runs these steps:

  • terraform init
  • terraform plan
  • terraform apply -refresh-only
  • terraform apply

I've only snipped out some auth information and modified git URLs; everything else is as-is. The 0.4.0 Jenkins output shows the applied changes when the compute resources went poof. The 0.3.11 Jenkins output is from after changing the zones back to unallocated resources and restricting provider versions to ~> 0.3.8.
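The version restriction mentioned above can be expressed in a required_providers block. This is a sketch of how we pinned back to the 0.3.x series; the source address follows the registry convention for this provider, and the exact constraint we used may have differed:

```hcl
terraform {
  required_providers {
    vra = {
      source = "vmware/vra"
      # Pessimistic constraint: allows 0.3.8 and later 0.3.x patch
      # releases, but excludes 0.4.0, avoiding the regression above.
      version = "~> 0.3.8"
    }
  }
}
```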

I do see the issue about compute_ids being names instead of ids (PR405), but it seems tangential to this one, as we aren't trying to specify ids at all but to include all unassigned compute. There does not appear to be any provider setting that matches that option in the web UI.

We are also affected by #358; you'll notice the region data updates on the vra_cloud_account_vmc and vra_zone resources every time, but that is a known issue.

Steps to Reproduce

  1. Create zones that are set to Include all unassigned compute
  2. Upgrade the Terraform provider to 0.4.0
  3. Run terraform apply
  4. Check the zone in the web UI; it is now set to Manually specify compute. (This was inconsistent: sometimes it took two runs to change the setting, but the Terraform output was the same both times.)
rnelson0 added the bug and needs-triage labels Oct 28, 2021
rnelson0 (Contributor, Author) commented:

Reviewing https://registry.terraform.io/providers/vmware/vra/latest/docs/resources/vra_zone, I do not see an attribute corresponding to the Compute setting in the web UI. If there is a parameter we should explicitly set, it would be great to call that out in the docs.

tenthirtyam (Contributor) commented:

Hey, Rob! Are you seeing this issue with newer versions as well?

tenthirtyam changed the title from "Provider 0.4.0 is inconsistently changing or eliminating the compute resources on zones" to "inconsistently changing or eliminating the compute resources on zones" Jul 10, 2024
rnelson0 (Contributor, Author) commented:

I cannot test this anymore: we switched to assigning compute based on tags and moved to provider 0.7.2. Given the age, I will close this out and open a new issue if we see something similar again.

tenthirtyam removed the needs-triage label Aug 7, 2024