
proxmox_virtual_environment_network_linux_... disable direct appliement #1637

Open
NiclasPe opened this issue Nov 14, 2024 · 3 comments
Labels
🐛 bug Something isn't working ⌛ pending author's response Requested additional information from the reporter

Comments

@NiclasPe

Describe the bug
I want to create all VLANs and bridges via Terraform to keep them consistent across multiple nodes, but once there are more than a few of these resources, Terraform fails because of the network apply that runs for each resource. It would be great to have an option to disable the immediate per-resource apply and instead apply all changes at the end of the run.

To Reproduce
Steps to reproduce the behavior:

  1. Create many 'proxmox_virtual_environment_network_linux_bridge' resources, possibly with VLAN dependencies
    -> Doesn't happen every time, but with around 10 of them across 2 nodes it fails most of the time
  2. Run 'terraform apply'
  3. See error

Please also provide a minimal Terraform configuration that reproduces the issue.

terraform {
  required_providers {
    proxmox = {
      source  = "bpg/proxmox"
      version = "0.66.2"
    }
  }
}


provider "proxmox" {
  endpoint  = "https://10.10.2.20:8006/"
  api_token = "root@pam!terraform=abc"
  insecure  = true
  ssh {
    agent    = true
    username = "root"
    node {
      name    = "pve1"
      address = "10.10.2.23"
    }
    node {
      name    = "pve2"
      address = "10.10.2.21"
    }
    node {
      name    = "pve3"
      address = "10.10.2.22"
    }
  }
}


#### Locals ####

locals {
  nodes = toset([
    "pve1",
    "pve2",
    "pve3"
  ])
}

#### Resources ####

# VLAN
resource "proxmox_virtual_environment_network_linux_vlan" "bond0_10" {
  for_each  = local.nodes
  node_name = each.value

  name    = "bond0.10"
  comment = "VLAN - Guests"
}

# Bridge
resource "proxmox_virtual_environment_network_linux_bridge" "vmbr10" {
  depends_on = [
    proxmox_virtual_environment_network_linux_vlan.bond0_10
  ]

  for_each  = local.nodes
  node_name = each.value

  name    = "vmbr10"
  comment = "Network - Guests"
  ports = [
    "bond0.10"
  ]
}

# ---------------------------------------------------------------------------------------------

# VLAN
resource "proxmox_virtual_environment_network_linux_vlan" "bond0_11" {
  for_each  = local.nodes
  node_name = each.value

  name    = "bond0.11"
  comment = "VLAN - LAN"
}

# Bridge
resource "proxmox_virtual_environment_network_linux_bridge" "vmbr0" {
  depends_on = [
    proxmox_virtual_environment_network_linux_vlan.bond0_11
  ]

  for_each  = local.nodes
  node_name = each.value

  name    = "vmbr0"
  comment = "Network - LAN"
  ports = [
    "bond0.11"
  ]
}

...

Expected behavior
The apply runs to completion without timeout failures.

Additional context
This happens because the network configuration is applied immediately for each resource. With an option to disable the per-resource apply, the run would go through smoothly.

  • Clustered Proxmox:
  • Proxmox version: latest
  • Provider version (ideally it should be the latest version): 0.66.2
  • Terraform/OpenTofu version: v1.5.7
  • OS (where you run Terraform/OpenTofu from): macOS or Linux
@NiclasPe NiclasPe added the 🐛 bug Something isn't working label Nov 14, 2024
@NiclasPe NiclasPe changed the title proxmox_virtual_environment_network_linux_... disable direct applyment proxmox_virtual_environment_network_linux_... disable direct appliement Nov 14, 2024
@bpg
Owner

bpg commented Nov 14, 2024

Hi @NiclasPe 👋
What error do you see?

@NiclasPe
Author

@bpg an API timeout:

proxmox_virtual_environment_network_linux_vlan.bond0_4004["destiny"]: Creating...
proxmox_virtual_environment_network_linux_vlan.bond0_4004["harmony"]: Creating...
proxmox_virtual_environment_network_linux_vlan.bond0_4002["harmony"]: Modifying... [id=harmony:bond0.4002]
proxmox_virtual_environment_network_linux_vlan.bond0_4002["destiny"]: Modifying... [id=destiny:bond0.4002]
proxmox_virtual_environment_network_linux_vlan.bond0_4003["destiny"]: Modifying... [id=destiny:bond0.4003]
proxmox_virtual_environment_network_linux_vlan.bond0_4003["harmony"]: Modifying... [id=harmony:bond0.4003]
proxmox_virtual_environment_network_linux_vlan.bond0_4001["destiny"]: Modifying... [id=destiny:bond0.4001]
proxmox_virtual_environment_network_linux_vlan.bond0_4001["harmony"]: Modifying... [id=harmony:bond0.4001]
proxmox_virtual_environment_network_linux_vlan.bond0_4004["harmony"]: Creation complete after 4s [id=harmony:bond0.4004]
proxmox_virtual_environment_network_linux_vlan.bond0_4003["harmony"]: Modifications complete after 7s [id=harmony:bond0.4003]
proxmox_virtual_environment_network_linux_vlan.bond0_4004["destiny"]: Still creating... [10s elapsed]
proxmox_virtual_environment_network_linux_vlan.bond0_4002["destiny"]: Still modifying... [id=destiny:bond0.4002, 10s elapsed]
proxmox_virtual_environment_network_linux_vlan.bond0_4002["harmony"]: Still modifying... [id=harmony:bond0.4002, 10s elapsed]
proxmox_virtual_environment_network_linux_vlan.bond0_4003["destiny"]: Still modifying... [id=destiny:bond0.4003, 10s elapsed]
proxmox_virtual_environment_network_linux_vlan.bond0_4001["destiny"]: Still modifying... [id=destiny:bond0.4001, 10s elapsed]
proxmox_virtual_environment_network_linux_vlan.bond0_4001["harmony"]: Still modifying... [id=harmony:bond0.4001, 10s elapsed]
╷
│ Error: Error reloading network configuration
│ 
│   with proxmox_virtual_environment_network_linux_vlan.bond0_4001["harmony"],
│   on main.tf line 43, in resource "proxmox_virtual_environment_network_linux_vlan" "bond0_4001":
│   43: resource "proxmox_virtual_environment_network_linux_vlan" "bond0_4001" {
│ 
│ Could not reload network configuration on node 'harmony', unexpected error: failed to reload network configuration for node "harmony": All attempts fail:
│ #1: failed to perform HTTP PUT request (path: nodes/harmony/network) - Reason: Put "https://x.x.x.x:8006/api2/json/nodes/harmony/network": context deadline exceeded
╵
╷
│ Error: Error reloading network configuration
│ 
│   with proxmox_virtual_environment_network_linux_vlan.bond0_4001["destiny"],
│   on main.tf line 43, in resource "proxmox_virtual_environment_network_linux_vlan" "bond0_4001":
│   43: resource "proxmox_virtual_environment_network_linux_vlan" "bond0_4001" {
│ 
│ Could not reload network configuration on node 'destiny', unexpected error: failed to reload network configuration for node "destiny": All attempts fail:
│ #1: failed to perform HTTP PUT request (path: nodes/destiny/network) - Reason: Put "https://x.x.x.x:8006/api2/json/nodes/destiny/network": context deadline exceeded
╵
╷
│ Error: Error reloading network configuration
│ 
│   with proxmox_virtual_environment_network_linux_vlan.bond0_4002["harmony"],
│   on main.tf line 71, in resource "proxmox_virtual_environment_network_linux_vlan" "bond0_4002":
│   71: resource "proxmox_virtual_environment_network_linux_vlan" "bond0_4002" {
│ 
│ Could not reload network configuration on node 'harmony', unexpected error: failed to reload network configuration for node "harmony": All attempts fail:
│ #1: timeout while waiting for task "UPID:harmony:00147AB9:00CD3A63:673B2DD2:srvreload:networking:root@pam!terraform:" to complete
╵
╷
│ Error: Error reloading network configuration
│ 
│   with proxmox_virtual_environment_network_linux_vlan.bond0_4002["destiny"],
│   on main.tf line 71, in resource "proxmox_virtual_environment_network_linux_vlan" "bond0_4002":
│   71: resource "proxmox_virtual_environment_network_linux_vlan" "bond0_4002" {
│ 
│ Could not reload network configuration on node 'destiny', unexpected error: failed to reload network configuration for node "destiny": All attempts fail:
│ #1: failed to perform HTTP PUT request (path: nodes/destiny/network) - Reason: Put "https://x.x.x.x:8006/api2/json/nodes/destiny/network": context deadline exceeded
╵
╷
│ Error: Error reloading network configuration
│ 
│   with proxmox_virtual_environment_network_linux_vlan.bond0_4003["destiny"],
│   on main.tf line 82, in resource "proxmox_virtual_environment_network_linux_vlan" "bond0_4003":
│   82: resource "proxmox_virtual_environment_network_linux_vlan" "bond0_4003" {
│ 
│ Could not reload network configuration on node 'destiny', unexpected error: failed to reload network configuration for node "destiny": All attempts fail:
│ #1: failed to perform HTTP PUT request (path: nodes/destiny/network) - Reason: Put "https://x.x.x.x:8006/api2/json/nodes/destiny/network": context deadline exceeded
╵
╷
│ Error: Error reloading network configuration
│ 
│   with proxmox_virtual_environment_network_linux_vlan.bond0_4004["destiny"],
│   on main.tf line 110, in resource "proxmox_virtual_environment_network_linux_vlan" "bond0_4004":
│  110: resource "proxmox_virtual_environment_network_linux_vlan" "bond0_4004" {
│ 
│ Could not reload network configuration on node 'destiny', unexpected error: failed to reload network configuration for node "destiny": All attempts fail:
│ #1: failed to perform HTTP PUT request (path: nodes/destiny/network) - Reason: Put "https://x.x.x.x:8006/api2/json/nodes/destiny/network": context deadline exceeded

@bpg
Copy link
Owner

bpg commented Dec 5, 2024

Hey @NiclasPe 👋🏼

Terraform can't "postpone" applying the configuration in that way, as each resource is treated independently and is applied as a whole or not at all.

I think the issue you're facing is the parallelism of network changes during execution. I see you have multiple interfaces defined for each of your nodes as separate resources, e.g., bond0_4001, bond0_4002, bond0_4003, bond0_4004. Since there are no dependencies between them, Terraform applies them in parallel, up to the current parallelism setting (10 by default). This means several concurrent network changes are executed at roughly the same time on the same node, and each triggers a network reload at the end of the apply, increasing the likelihood of breaking things.

You have two options to mitigate this:

  1. Create a dependency chain for all vmbr* and bond0_* resources. For example, establish dependencies like vmbr0 -> vmbr1 -> ... and bond0_1 -> bond0_2 -> .... The specific order within the chain doesn't matter—it simply ensures Terraform processes these resources sequentially rather than in parallel. You can retain the existing vmbrX -> bondX dependency to ensure all "bond" resources are created first.

  2. Set parallelism to 1 when running apply.

Please let me know if that solves your issue.
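For reference, a minimal sketch of option 1 using the resource names from the configuration above (the chain order is arbitrary; the depends_on only serves to serialize the applies):

```hcl
resource "proxmox_virtual_environment_network_linux_vlan" "bond0_10" {
  for_each  = local.nodes
  node_name = each.value
  name      = "bond0.10"
}

resource "proxmox_virtual_environment_network_linux_vlan" "bond0_11" {
  # Ordering-only dependency: forces this VLAN to be applied after
  # bond0_10, so two network reloads never run concurrently on one node.
  depends_on = [proxmox_virtual_environment_network_linux_vlan.bond0_10]

  for_each  = local.nodes
  node_name = each.value
  name      = "bond0.11"
}
```

Option 2 is simply `terraform apply -parallelism=1`, which limits the whole graph walk to one concurrent operation at the cost of a slower apply.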

@bpg bpg added the ⌛ pending author's response Requested additional information from the reporter label Dec 5, 2024
@bpg bpg moved this to On-Hold in terraform-provider-proxmox Dec 8, 2024