[BUG] - vultr_kubernetes fails to register cluster deletion #313

Open
johnjmartin opened this issue Jan 19, 2023 · 4 comments

@johnjmartin
Describe the bug
Sometimes, when a vultr k8s cluster is deleted (either manually or when applying a destructive terraform update), the terraform provider does not properly register the deletion. This causes terraform to fail when updating state.

To Reproduce
Steps to reproduce the behavior:

  1. Manually create a vultr k8s cluster outside of tf
  2. Add a vultr k8s cluster with the same name to tf (a minimal configuration sketch follows these steps)
  3. Apply this change, allowing vultr to replace the k8s cluster:
$ terraform apply
  # vultr_kubernetes.sjc must be replaced
-/+ resource "vultr_kubernetes" "sjc" {
      ~ cluster_subnet = "10.244.0.0/16" -> (
      + region         = "sjc" # forces replacement
... 
  4. See error
vultr_kubernetes.sjc: Destroying... [id=c3b49514-745e-4fa5-b60e-8c3f19d128a8]
vultr_kubernetes.sjc: Still destroying... [id=c3b49514-745e-4fa5-b60e-8c3f19d128a8, 10s elapsed]
vultr_kubernetes.sjc: Still destroying... [id=c3b49514-745e-4fa5-b60e-8c3f19d128a8, 20s elapsed]
vultr_kubernetes.sjc: Still destroying... [id=c3b49514-745e-4fa5-b60e-8c3f19d128a8, 30s elapsed]
vultr_kubernetes.sjc: Still destroying... [id=c3b49514-745e-4fa5-b60e-8c3f19d128a8, 40s elapsed]
╷
│ Error: error deleting VKE c3b49514-745e-4fa5-b60e-8c3f19d128a8 : gave up after 4 attempts, last error: "{\"error\":\"Internal server error.\",\"status\":500}"
  5. Continue to get errors on every subsequent terraform plan/apply:
$ tf plan -target=vultr_kubernetes.sjc                                                                                                                                                                            
│ Error: error getting cluster (c3b49514-745e-4fa5-b60e-8c3f19d128a8): gave up after 4 attempts, last error: "{\"error\":\"Internal server error.\",\"status\":500}"
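
For reference, a minimal sketch of the kind of configuration involved (resource and block names follow the vultr provider docs; the label, version, and node pool values here are illustrative assumptions, not the exact config):

# Sketch of the resource being replaced in step 3.
# Values are placeholders, not the reporter's actual settings.
resource "vultr_kubernetes" "sjc" {
  region  = "sjc"          # matches the plan output above
  label   = "sjc-cluster"  # assumed label
  version = "v1.25.4+1"    # assumed VKE version string

  node_pools {
    node_quantity = 3
    plan          = "vc2-2c-4gb"  # assumed node plan
    label         = "sjc-pool"
  }
}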

Expected behavior
I expect vultr to properly tear down the old k8s cluster and stand up the new one.

Versions

tf --version                                                                                                                                                                                       
Terraform v1.3.7
on darwin_arm64
+ provider registry.terraform.io/hashicorp/aws v4.50.0
+ provider registry.terraform.io/vultr/vultr v2.12.0
@optik-aper (Member)

@johnjmartin what do you mean in step 2? A cluster created in my.vultr and one created through terraform will have different IDs; unless you import the cluster created outside of terraform, they're two separate clusters. Are you using terraform import to bring it into terraform state?

@johnjmartin (Author)

Yes, we used terraform import to try to bring the cluster into the tf state. However, the import did not work completely: in step 3, vultr still wanted to replace the resource.
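
For context, the import was along these lines (standard terraform import syntax, using the resource address and cluster ID from this thread):

$ terraform import vultr_kubernetes.sjc c3b49514-745e-4fa5-b60e-8c3f19d128a8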

@optik-aper (Member)

Thanks for the clarification. I'll test this out today.

optik-aper self-assigned this Jan 24, 2023
@johnjmartin (Author)

FYI, I ended up resolving this by manually removing the invalid clusters from my terraform state file.
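
If anyone else hits this, the same cleanup can be done with terraform's built-in state command rather than editing the state file by hand (using the resource address from this thread):

$ terraform state rm vultr_kubernetes.sjc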
