```
Error: creating Managed Kubernetes Cluster "viya-tst-aks" (Resource Group "viya-tst-rg"): containerservice.ManagedClustersClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: Code="AgentPoolK8sVersionNotSupported" Message="Version 1.18.14 is not supported in this region. Please use [az aks get-versions] command to get the supported version list in this region. For more information, please check https://aka.ms/supported-version-list"

  on modules/azure_aks/main.tf line 2, in resource "azurerm_kubernetes_cluster" "aks":
   2: resource "azurerm_kubernetes_cluster" "aks" {
```
Run the following Azure CLI command to list the Kubernetes versions supported in your Azure region, and use one of those values for the `kubernetes_version` variable in your tfvars input file:

```bash
az aks get-versions --location <YOUR_AZURE_LOCATION> --output table
```
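For example, if the table shows 1.27.7 among the supported versions (a hypothetical value; substitute one actually reported for your region), set it in your tfvars file:

```hcl
# terraform.tfvars
# "1.27.7" is a hypothetical value -- use a version reported by
# "az aks get-versions" for your region
kubernetes_version = "1.27.7"
```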
There is a known bug, currently without an owner, that sometimes requires you to run the `terraform destroy` command twice before all resources are removed from the Terraform state. Here is a sample of the error:
```
Error: waiting for the deletion of Node Pool "stateful" (Managed Kubernetes Cluster "viya-tst1-aks" / Resource Group "viya-tst1-rg"): Code="Canceled" Message="The operation was overriden and canceled by a later operation REDACTED."

Error: A resource with the ID "/subscriptions/REDACTED/resourcegroups/viya-tst-rg/providers/Microsoft.ContainerService/managedClusters/viya-tst-aks/agentPools/stateless" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_kubernetes_cluster_node_pool" for more information.
```

If you see the second error, import the existing node pool back into the Terraform state, as the error message suggests:

```bash
terraform import -var-file=sample-input.tfvars -state=terraform.tfstate module.node_pools[\"stateless\"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0] "/subscriptions/REDACTED/resourceGroups/viya-tst-rg/providers/Microsoft.ContainerService/managedClusters/viya-tst-aks/agentPools/stateless"
```
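After the import succeeds, re-running destroy should remove the remaining resources. A minimal sketch, assuming the same `-var-file` and `-state` arguments as the import command above:

```bash
# Second destroy pass; the first pass left the node pool behind
terraform destroy -var-file=sample-input.tfvars -state=terraform.tfstate
```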
```
Error: authorization.RoleAssignmentsClient#Create: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationFailed" Message="The client 'REDACTED' with object id 'REDACTED' does not have authorization to perform action 'Microsoft.Authorization/roleAssignments/write' over scope '/subscriptions/REDACTED/resourceGroups/viya-tst-rg/providers/Microsoft.ContainerRegistry/registries/viyatstacr/providers/Microsoft.Authorization/roleAssignments/REDACTED' or the scope is invalid. If access was recently granted, please refresh your credentials."

  on modules/azurerm_container_registry/main.tf line 18, in resource "azurerm_role_assignment" "acr":
   18: resource "azurerm_role_assignment" "acr" {
```
Check the values of your `ARM_*` and `TF_*` environment variables, and confirm that the identity they point to is authorized to perform `Microsoft.Authorization/roleAssignments/write` on the target scope.
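One quick way to inspect what Terraform will use for authentication (the `ARM_*` names below are the standard azurerm provider credential variables; adjust to however you supply yours):

```bash
# List the Azure/Terraform variables set in this shell
# (note: this prints values, including secrets -- avoid in shared terminals)
env | grep -E '^(ARM_|TF_)'

# The azurerm provider reads service principal credentials from these:
export ARM_SUBSCRIPTION_ID="<subscription-id>"
export ARM_TENANT_ID="<tenant-id>"
export ARM_CLIENT_ID="<service-principal-app-id>"
export ARM_CLIENT_SECRET="<service-principal-secret>"
```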
```
Error: Error creating NetApp Account "sse-vdsdp-ha1-netappaccount" (Resource Group "sse-vdsdp-ha1-rg"): netapp.AccountsClient#CreateOrUpdate: Failure sending request: StatusCode=404 -- Original Error: Code="InvalidResourceType" Message="The resource type could not be found in the namespace 'Microsoft.NetApp' for api version '2019-10-01'."

  on modules/azurerm_netapp/main.tf line 29, in resource "azurerm_netapp_account" "anf":
   29: resource "azurerm_netapp_account" "anf" {
```
Check that your Azure subscription has been granted access to the Azure NetApp Files service: see Azure NetApp Quickstart.
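One way to verify access from the command line, using the standard Azure CLI resource-provider commands:

```bash
# Check whether the Microsoft.NetApp resource provider is registered
az provider show --namespace Microsoft.NetApp --query registrationState --output tsv

# Register it if the state is "NotRegistered" (this only succeeds once
# your subscription has been approved for Azure NetApp Files)
az provider register --namespace Microsoft.NetApp
```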
In the event of a SAS Viya Platform deployment shutdown on an AKS cluster with an Azure NetApp Files NFSv3 volume, the file locks persist and `sas-consul-server` cannot access `raft.db` until the file locks are broken.
There are two options to avoid this issue:

- Break the file locks from the Azure Portal. For details, see Troubleshoot file locks on an Azure NetApp Files volume.
- Use Azure NetApp NFS volume version 4.1. Update to the latest version of sassoftware/viya4-iac-azure to use NFSv4.1 by default. If you are using sassoftware/viya4-iac-azure release v7.2.0 or earlier, add the variable `netapp_protocols` to your terraform.tfvars to switch to NFSv4.1. **Note:** Changing this on an existing cluster will result in data loss.
Example:

```hcl
# Storage HA
storage_type     = "ha"
netapp_protocols = ["NFSv4.1"]
```
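Because switching protocols recreates the volume, it can help to review the plan before applying the change (a generic Terraform step, not specific to this repo):

```bash
# Inspect what will be destroyed and recreated before applying
terraform plan -var-file=terraform.tfvars
```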